Archive for the 'programming' Category

ePrints Author ID

August 28th, 2009 | Category: eprints,opensource,programming,university,web

One of the things I've been working on over the last month is the ability to create distinct and unique author identifiers for ePrints. ePrints is a really awesome Perl-based repository that the University uses to handle its research papers, but whilst it is great at handling eprints, documents, users and a whole host of other things, it really falls over when you try to treat authors as individuals.

Read more

2 comments

Putting Token Login to work

So a few weeks ago I released JAuthTools 1.5.4, which features Token Login. Token Login was created to fill the need for a secure token that you can use for automatic login, for example with things like newsletters. Today I'm going to show you how you can write something simple with Token Login to handle automatic login with tokens in a unique problem case.
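As a taste of the general idea, here's a minimal sketch in plain PHP. The function names and the HMAC scheme are mine for illustration, not the actual JAuthTools API: a signed token binds a user ID to an expiry time, so a link embedded in a newsletter can log the reader in automatically without exposing their password.

<?php
// Illustrative sketch only -- hypothetical names, not the JAuthTools API.
// The token binds a user ID to an expiry time and signs both with a
// secret known to the site, so the link can't be forged or extended.

function make_login_token($userId, $secret, $ttl = 86400)
{
    $expires = time() + $ttl;
    $payload = $userId . ':' . $expires;
    return base64_encode($payload . ':' . hash_hmac('sha256', $payload, $secret));
}

function check_login_token($token, $secret)
{
    $parts = explode(':', base64_decode($token));
    if (count($parts) !== 3) {
        return false;
    }
    list($userId, $expires, $sig) = $parts;
    // Reject expired tokens and bad signatures.
    if ((int) $expires < time()) {
        return false;
    }
    if (hash_hmac('sha256', $userId . ':' . $expires, $secret) !== $sig) {
        return false;
    }
    return $userId;
}

// e.g. embed in each newsletter link:
$url = 'http://example.com/index.php?logintoken='
     . urlencode(make_login_token(42, 'site secret'));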
Read more

14 comments

Boycott Experts Exchange

February 21st, 2009 | Category: programming,search,web

Have you ever done a search for a problem you've had, only to see something that looks tantalisingly like exactly the answer you wanted, before painfully realising that it's on Experts Exchange, and that the page you have just clicked on says it has the answer but you don't have an account? Sure, you could sign up for their free 30-day trial, and you might even find the answer if you are lucky, but what happens next time? It's like a drug dealer: the first hit is free, but you pay for everything from then on.

Now the original design of Experts Exchange wasn't too bad. You could ask questions if you had enough points, and you could assign points to your questions, increasing the value with their importance, I guess. You acquired points either by paying or by successfully answering questions. The thing that annoyed me was that unless you were the person nominated as having answered the question, you got no tangible credit for your contribution, even if it helped, and even if the accepted answer was actually wrong or not the best response.

But obviously at this point they feel they have enough knowledge to justify not only spamming their pages with tonnes of ads but also starting to force people to pay for even more. And because they've been around for a while and have built a good reputation, they're using this, plus close keyword matches on the question, to continue to drive traffic.

So now, with Google's SearchWiki, we can fight back against Experts Exchange and its pointless entries in Google's index. All you need to do is be logged in, and when you see an Experts Exchange result in your Google search, make sure you delete it from your results. My belief is that if enough of us blacklist and delete those entries, Google will take note, eventually lower the rank of those entries, and we'll stop seeing their results.

2 comments

Today: 05-Nov-2008: Kerberos and Joomla! 1.6’s Backup system

November 5th, 2008 | Category: joomla,kerberos,opensource,programming,today

Today had a lackadaisical start, with me working on getting Dawn of War: Winter Assault to work on my Mac (once it was fully patched it seems to have started working; yay for no copy protection!) after doing a whole heap of disc swapping last night to get the base game installed, only to have it complain that it couldn't find a CD/DVD drive. After I installed the 1.50 patch it asked me if I wanted to start, and for the first time it actually started the game without issues. I managed to load it up and play a quick game, and fielded a call from my Mum before heading to work. I also added some projects to my list and categorised items; now on the todo list: an automated login key generator for Joomla! and a component to complement the ban IP/address plugin. Now all I need is time!

The Kerberos keys that I had asked to be remade were ready by the time I got in. It took a bit of time to rebuild the different keytab files to support the vhost environment (you need to merge the respective keytab files), but once that was done everything was working. Well, mostly working. Firefox on my Mac worked fine; Firefox on the Windows desktops I tried worked once configured (see http://grolmsnet.de/kerbtut/firefox.html for what you need to do to get Firefox to do Negotiate); IE on most of the desktops worked fine, though some installations weren't getting SSO; all of the Citrix servers seem not to pass through authentication (they end up in a weird loop where IE appears to keep reloading the page); and Safari on my Mac doesn't seem to want to play the game either. Perhaps I'll sort that out over the next week or so, but going through and checking different IE versions to see whether they worked consumed a reasonable amount of time. The other machine that won't play the game is Firefox on my Linux desktop (it should be working), so I'll have a look at the ones that don't work and why. For the Windows boxes I have a feeling the Netware client is causing issues (which would explain Citrix), so hopefully once our network eradicates Novell we'll be fine.

And that leads us to the afternoon's fun of building Joomla! 1.6's backup system. I've managed to get the system to export the sample database, reimport it and then delete the files afterwards, so I've moved on to much larger goals: I've taken one of our internal websites and I'm trying to get it to import. Suffice it to say that it has enough data to cause the system issues. For data loading I'm using a heavily modified version of Alexey Ozerov's "BigDump" script, which has been used in the past, in a less modified form, for the Joomla! 1.5 migrator. It is slowly being converted to use the new Tasks system in 1.6, which is another concept borrowed from the 1.0 migrator. The Tasks system in 1.6 has two parts: a task set, which is a container, and the individual tasks inside it. Considering backups, one task set might be a full backup run of the site, with the individual tasks being an SQL backup, a file backup (a tar archive perhaps?) and maybe copying the result to a remote FTP site or similar. So the one task set would have an "SQL backup" task and a "file backup" task. Extension package installation may be handled in a similar way, splitting the install into different parts.
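To make the shape of that concrete, here is a rough sketch with invented class names (the real 1.6 API will differ): a task set just runs its child tasks in order, and each task does a slice of work and reports whether it has finished, so a long run can be resumed rather than dying on a PHP timeout.

<?php
// Rough sketch of the task set / task idea -- invented names, not the 1.6 API.

class BackupTask
{
    public $name;
    public $state = array(); // persisted between slices, e.g. a file offset

    public function __construct($name)
    {
        $this->name = $name;
    }

    // Do one slice of work; return true when the task is complete.
    public function run()
    {
        echo "Running task: {$this->name}\n";
        return true;
    }
}

class BackupTaskSet
{
    private $tasks = array();

    public function add(BackupTask $task)
    {
        $this->tasks[] = $task;
    }

    public function run()
    {
        foreach ($this->tasks as $task) {
            while (!$task->run()) {
                // keep calling until the task reports completion
            }
        }
    }
}

// One task set is a full backup run of the site.
$backup = new BackupTaskSet();
$backup->add(new BackupTask('SQL backup'));
$backup->add(new BackupTask('file backup'));
$backup->run();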

A new part of this is the data load system, which provides functionality to read and load data files. At the moment it only supports SQL, but with some luck I'm hoping I'll be able to create a CSV one as well, again probably reusing Alexey's code in part. I'm mostly through building this system, though I'm experiencing a strange issue with my sample data (hence the updates haven't been committed to J!'s SVN repository today): it loads the file up through almost 2000 queries and then seems to stop suddenly. I'm not quite sure what's going on, but I'm happy enough that the task system is picking up and storing values for it to progress as far as it does.
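For a feel of how BigDump-style chunked loading works (a hypothetical function of my own, not Alexey's actual code), the loader resumes from a saved file offset, executes whole statements until it hits a per-slice limit, and hands the new offset back for the task state:

<?php
// Hypothetical sketch of chunked SQL loading, in the style of BigDump.
// Executes at most $maxQueries statements per call and returns the new
// file offset so the task system can resume the load on the next request.

function load_sql_chunk($file, $offset, $maxQueries, $db)
{
    $fp = fopen($file, 'r');
    fseek($fp, $offset);

    $query = '';
    $count = 0;
    while ($count < $maxQueries && ($line = fgets($fp)) !== false) {
        $trimmed = trim($line);
        // Skip blank lines and SQL comments.
        if ($trimmed === '' || substr($trimmed, 0, 2) === '--') {
            continue;
        }
        $query .= $line;
        // Naive end-of-statement detection: a trailing semicolon.
        if (substr($trimmed, -1) === ';') {
            mysql_query($query, $db); // or the Joomla database layer
            $query = '';
            $count++;
        }
    }

    $newOffset = ftell($fp);
    fclose($fp);
    return $newOffset; // store this in the task's state
}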

Another successful day spent on my Mac as well, although NetBeans doesn't seem to want to look at my project any more, crashing instead of loading it, which is disappointing, but I'll work that out another day. And now it's time to enjoy some Dawn of War.

No comments

Bash to Phing to make

February 10th, 2008 | Category: programming

So most of my day I write code, which is cool. I usually don't have to redeploy things because I test in the same environment I develop in; it isn't until I start a release cycle and need to package things up that I really make use of build scripts. Initially I would just export from SVN and tar things up by hand. I had also recently heard a few people suggest Phing, a PHP-based build tool that uses XML files similar to Ant, as something to use for my builds. The scripting I had done was all in Bash, a rather flexible shell environment, but one lacking a few features that a dedicated build tool gives me.

What triggered a lot of this is the set of rather strange Bash scripts that forms the build tool for Joomla!. It handles building the packages, exporting them and building the patch packages (a diff of the new and updated files against the old release). Wilco always says we should use Phing instead because it has all of these features built in. Then this project I work on, JAuthTools, started getting big, and building it by hand became a pain, so I decided that now was the time to get into Phing.

Phing
Phing (http://phing.info/) is, as stated before, a PHP-based build tool. Its web site says it can do anything you can do with a traditional build system, which is really just your base expectation when you think about it: why would you replace your build system with something that provides less? Phing comes by default with all of the nice things you'd want from a build tool: tasks to run commands, output things, make directories, move and copy files, call other targets, and a few other niceties.
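Under the hood those tasks are just PHP classes. From memory of the classic Phing API (so treat the details as approximate), a minimal custom task looks something like this, with a setter per XML attribute and a main() that does the work:

<?php
// A minimal custom Phing task, sketched from memory of the classic API.
require_once 'phing/Task.php';

class HelloTask extends Task
{
    private $name = 'world';

    // Phing maps the XML attribute name="..." onto this setter.
    public function setName($name)
    {
        $this->name = $name;
    }

    // Called when the task's element is executed in a target.
    public function main()
    {
        $this->log('Hello ' . $this->name);
    }
}

You would then register it with a taskdef element in the build file and call it like any built-in task.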

The problem is that what I'm after is something to build tarballs, and those are marked as optional tasks for the system. The optional set has some cool things like DbDeploy, which connects to a database and builds SQL delta files (version control for your DB), coverage analysis, a PDO SQL executor, PHPUnit integration and, my personal favourites, tasks for Zip, Tar and Subversion.

So I went to install Phing on my work box. It's a Linux machine, so I wanted to avoid the PEAR path, because that is just as good as installing things all over my system. Phing lived happily in my personal bin directory until I realised that to get SVN working I needed to get this thing from PEAR. I decided I might as well fight PEAR (and my work proxy, for that matter) and install it that way. It only took a few attempts to get PEAR to work through the proxy; it appears that since the last time I fought it they fixed things up, which is nice. CPAN is another one that is a pain, because it wants to pull things left and right, and my proxy decides that half the files it wants are too big and that it should ask for them after 5pm or use a special download request system. But I digress. Phing ended up mostly unhappily installed via PEAR, with a few complaints: I said "yes, download your dependencies", which bombed out when it figured out that a dependency was still marked unstable and refused to install it (so why bother asking? I want to install it, I even said so; that's what "alldeps" means, right?). So now I have an SVN-capable Phing. Shiny!

So I went off and built all sorts of things, and suddenly the installer rejected the package I had just built with Phing, complaining it was an invalid zip file. I ended up working around this with a Bash script that rebuilt the ZIP archive after Phing was done with its business. So I progressed from Bash to… err… Bash via Phing! This was on the latest PHP 5.2.5 build, so I'm not sure where the fault lies, but if I end up leaning on the old tool anyway then as far as I'm concerned Phing isn't a replacement for my build system.

The next thing I found was that nested dependencies appeared not to be fully supported by the tool. That didn't particularly bother me, though as you'll read later, Make supports this perfectly well. I did have to alter my build file to handle Subversion checkouts better, because Phing would check the code out once for each sub-dependency. What I wanted was a top-level checkout shared by the sub-dependencies, so that each individual target wouldn't have to do it itself, especially for targets drawing from the same repository. I ended up just moving the checkout into a common parent dependency and going from there.

And then I wanted to do some work on my Mac at home, where I have a slightly out-of-date PHP, 5.2.0, which appears to have caused issues. This time I went straight to using PEAR to install things, and again this caused issues, but since I was expecting some hassle I managed to get things working. I also don't have a nasty proxy or firewall at home, which makes life easier when installing these things. Then I tried out my first build script and it spewed out a horrible number of errors. For some reason, on this combination the SVN task doesn't want to work: it dies silently, which causes the rest of the script to fail because everything downstream expects the checkout to have succeeded. Given that I'm using a build tool precisely so that errors get properly trapped, it's galling that it doesn't bail out or check that directories exist, as I could make a shell script do.

Whilst this all sounds bad, I did like one feature: being able to control what the zip task (and selected other tasks) includes and excludes, which made building packages easier and covered operations that are slightly harder in Bash or Make.

Make
Today I started off making a few new scripts on my Mac to replace the non-functional Phing ones. It wasn't too hard to go from Phing to Make, though I ended up re-expanding my SVN targets now that I have a system where the dependencies work better. Strangely enough, my zip files also started building properly, which was nice to have. But this got me thinking: why do we reinvent the wheel?

Sure, it is nice to have a fancy XML file, even if it takes a lot of control away, but at the end of the day, when I compare what I wrote to get an SVN export in Phing with what I have to write in Make or Bash, on sheer number of characters Phing loses again.

But here is the really simple thing: Phing's SVN support is a wrapper around the regular command-line SVN client. The advantage of Make or Bash, for me, is their portability. My primary environments are Mac and Linux, and when I have been on Windows for extended periods I ended up installing Cygwin, so at the end of the day my build system is portable between systems. Additionally, the "Gotchas" page for Phing shows that it isn't really a walk in the park to get it working on Windows either. The question is: do you really want to fiddle with your build system each time you change platforms, or do you want to get up and going without too many issues?

Make and Bash are both staples of the Unix environment and have had years of testing. Whilst it is nice to have something with a whole heap of new features, the fact is that some of the basic functionality doesn't work properly or is a pain to get working across platforms, and I ended up replicating those features anyway.

No comments