Archive for the 'opensource' Category
Rotating Apache log files on Mac OS
Recently I went through setting up a new Mac from scratch instead of porting it across from another machine. As part of this process I set up Apache with a bunch of log files for the various VHosts I'm working on for a side project I'm playing with. This means plenty of log files. In the past I've generally not worried about doing anything with them; I just let them grow and, if they got too big, nuked them by hand. This time round, however, I figured I'd get them rotated properly so that I could have clean logs for each day that I'm working.
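To give a rough idea of the shape of the thing (the vhost name and paths below are placeholders, not my actual config), piping each vhost's logs through Apache's own rotatelogs is enough to start a fresh file every day:

    # Hypothetical vhost snippet: rotatelogs starts a new file every 86400 seconds (daily).
    <VirtualHost *:80>
        ServerName myproject.local
        DocumentRoot "/Users/me/Sites/myproject"
        ErrorLog  "|/usr/sbin/rotatelogs /var/log/apache2/myproject-error_log.%Y-%m-%d 86400"
        CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/myproject-access_log.%Y-%m-%d 86400" combined
    </VirtualHost>

Adjust the rotatelogs path to wherever your Apache build keeps it; the date in the filename means each working day gets its own clean log.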
OSDC2009 Presentations
My OSDC2009 presentations are now up on both the OSDC website (see http://2009.osdc.com.au/sam-moffatt) and on my University's ePrints site. You can check out the individual papers and their associated presentations on their respective ePrints pages.
Free Git/SVN hosting providers
During my recent presentation at the Joomla! Developers Conference in New York, I emphasised the importance of using a version control system (in particular SVN) to maintain copies of everything. From PSD versions of designs and templates to more traditional items like source code, some form of version control is something you need. If you're working in an organisation where you need to collaborate with more than one person, version control tools provide valuable collaboration and consistency controls. Another interesting aspect of hosted repositories is backups – not only do these systems keep different versions of your data, but if you use them properly you've also got a backup in case your machine gets toasted. During my presentation I used CVSDude as an example. CVSDude is a local Australian company run out of Brisbane, which is why they stick in my mind, but there are other options.
Firstly I'll do a quick overview of the two main systems being promoted: Git and SVN. SVN, or Subversion, was developed as a newer version of CVS. Subversion has been referred to as a "code wiki", which I feel is a great explanation: it keeps versions of files around for you and ensures you've got everything in hand. Subversion is a centralised source control system, so you need a central server for different people to work against (you can also run it on your own machine, but there is only one point of truth). Git is a distributed version control system where basically every working copy has a full copy of the tree. This is great for pure source projects and a few other sorts where having everything on disk isn't too bad, but it doesn't work well when you have larger repositories and files that you might want to share. Git doesn't permit partial checkouts either, so you get the entire repository or nothing. This can work in your favour (you can commit and check history locally) or against you (it might be a chunky repository). Git repositories are typically more compressed and smaller than SVN's; however, Git's Windows tools leave a lot to be desired. If you're working with people who aren't technical, Git can be painful and I'd suggest Subversion. Both have a learning curve, but Subversion's is gentler and the centralised control is useful for most projects.
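To make the centralised versus distributed distinction concrete, the day-to-day commands look roughly like this (the repository URLs are placeholders):

    # Subversion: one central repository, every commit talks to the server.
    svn checkout http://svn.example.org/myproject/trunk myproject
    cd myproject
    svn commit -m "Fix the template"     # goes straight to the central server

    # Git: every clone is a full repository, so commits and history are local.
    git clone git://github.com/example/myproject.git
    cd myproject
    git commit -a -m "Fix the template"  # recorded locally only
    git log                              # full history works offline
    git push origin master               # only now does anything hit the server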
Provider A: GitHub
GitHub are one of the most popular Git hosting sites out there for open source projects, as well as being a commercial hosting provider with "private" repositories. For people who are doing open source projects and interested in using Git, GitHub, with its 300MB of disk space (expandable for open source projects) and unlimited public repositories and collaborators, is perhaps the most powerful option in the Git sphere. It is also all backed up, like most of the options, so you've got some peace of mind there. GitHub have personal and business plans offering different "private" hosting options, starting at five private repositories, 600MB and one additional collaborator for $7 per month. The plans go up incrementally from there, adding disk space, private repositories and private collaborators. GitHub also offer a wiki as an option, as well as a private/public pastebin service.
Check out GitHub’s pricing at http://github.com/plans
Provider B: Unfuddle
Unfuddle is something I've just come across after my presentation, on a recommendation from someone at the conference. Unfuddle offers both Git and SVN support as well as a form of wiki in what appear to be "notebook pages". It limits you to one active project but features RSS and iCal support as well as bug tracking, milestones and, in the free version, support for two people to collaborate. The free version offers 200MB; the more expensive versions add file attachments, SSL and time tracking as well as more disk space, more active and archived projects, more people and unlimited "notebook pages".
Check out Unfuddle’s pricing at http://unfuddle.com/about/tour/plans
Provider C: CVSDude
CVSDude are a much older group who initially offered CVS hosting but now also handle Subversion. They support Trac, a popular development support tool that integrates with Subversion, providing milestone support, issue tracking with repository integration (e.g. you can close tickets from SVN) and a wiki. CVSDude appear to be slightly below par with GitHub, with their cheapest plan offering 500MB of storage, one project and two users (as opposed to five projects and 600MB from GitHub). They also appear to offer Bugzilla, a popular bug tracker (perhaps for people who don't like Trac's or have disabled it?), as well as DAV storage, whatever that means. CVSDude do emphasise that they have better backup facilities than others offer, plus the Trac/Bugzilla instance provides more functionality than GitHub does. CVSDude annoyingly hide a lot of information behind marketing, so you need to do a lot of reading to work out what they're really selling for each option.
Their overview page serves as an entry point for finding more information; check it out at http://cvsdude.com/hosting-products.html
Provider D: GForge Group (and JoomlaCode)
GForge Group appear to offer free hosting for one project per person on their stack, with 75MB of space. You can add up to five people to the project and it is a private project. GForge offers a wiki, mailing lists, forums, a file release system, a tracker and a few other tools as well. It has the interesting caveat that if you don't log in for 30 days your project will be permanently deleted. GForge are selling a stand-alone product more than anything, so they're encouraging you to head that way; still, as an offering it is interesting. It looks like you can add to it, however their store link didn't appear to be working properly. JoomlaCode is powered by GForge AS and offers many of the same features (version control is currently limited to SVN, though GForge AS supports CVS and Git amongst other things). JoomlaCode's hosting is free for GPL non-commercial Joomla!-related projects and is offered as a service to the community.
Check out http://gforge.com/gf/register/?action=ProjectAdd for more details.
Provider E: PixelNovel
PixelNovel is another host I've just seen today; they offer a plugin for Adobe Photoshop that integrates Subversion straight into the application. This means that you don't need to jump out of Photoshop to handle versioning, and it also generates previews of the Photoshop files for when you're going back in time. The standalone Photoshop plugin will work with seemingly any Subversion repository and costs around $60 per licence, though it would appear you can pick up a free copy with a PixelNovel account, which offers 100MB for nothing and goes up from there.
Check out their pricing and plans at http://pixelnovel.com/pricing
As with everything, before you hand over cash, code or templates, read the fine print. Though it doesn't say it outright, PixelNovel for example will delete your account after two months of inactivity or lack of bill payment, and GForge have similar, albeit much more upfront, text. Some services offer SLAs on performance and uptime guarantees whereas others don't, whilst some mention backups on a very definite schedule (I think CVSDude offers 10-minute backups) and others mention that they do it without many details. Some also offer more tools than others, and PixelNovel has some specialised tools targeted particularly at designers. As with everything the devil is in the detail, so good luck checking things out and make a decision based on your own personal needs.
ePrints Author ID
One of the things I've been working on over the last month is the ability to create distinct and unique author identifiers for ePrints. ePrints is a really awesome Perl-based repository that the University uses to handle its research papers, but whilst it is great at handling eprints, documents, users and a whole host of other things, it really falls over when you try to treat authors as individuals.
Today: 6-Nov-2008: Barcodes and Backups!
As a few of my mornings seem to be starting lately, I played a quick, albeit one-sided, game of Dawn of War with my housemate to test that network play was working between my Mac (using CrossOver Games) and his PC (Windows XP). Suffice to say everything worked fine network-wise; my Mac still has some visual issues but I'm not too fazed about that at the moment. So that was an interesting start to the day, and when he returns from work it'll be an interesting end as well.
Work again is fun as always. We're working on solving various election problems and getting envelopes. Our printers came back stating that they didn't want to do Code 128 for the barcodes because it's too hard, preferring Code 39 instead. As the day progressed it turned out that we're going to use Code 128 anyway, mostly for space reasons, because we've got that much junk going out on the envelopes. This saga has been running over a few days; I really want to get a few proofs done before they run the actual batch, to ensure that everything is good, but we'll see. Testing has also begun on the system we're going to use to track the election, to ensure that it can in fact handle things properly; we had a word of warning from one of the newer guys from the regions that it had issues, and that was with nowhere near the same workload. So we're hoping we can test the system out with a few VMs emulating multiple data entry operators. There have been issues with the system in question in the past (in fact the system is being replaced), so hopefully this won't be one of those times, otherwise we're going to have a large number of problems.
The bulk of my day was spent working on the restore framework for Joomla! 1.6, or more accurately the new JDataLoad system and the JLoaderSql adapter. The data load system, as its name suggests, loads data into the Joomla! database from a data source. In this case I'm looking at SQL files; my sample data is actually one of the 1.5 sites that I'm an administrator of at work. It's relatively small in the grand scheme of things, only a few meg and around 9000 queries. So far it's been sufficient to find a few issues: one being a table dropped just before a task yield, which caused a missing table error from J! (I put in a simple patch for that: if the last query was a drop, go on to the next query in the hope that it's a create), and another being a minor typo which caused some multiline strings to be processed incorrectly when they fell on a yield boundary. But all in all it's working well, even importing data faster than MySQL Query Browser did in my test runs (in fairness, it highlights each query as it goes). It's now committed to trunk, and when I get a chance I'll write up something about it and put it somewhere.
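JDataLoad and JLoaderSql are the real names, but the actual code lives in trunk; the sketch below is only my own rough illustration of the chunked, resumable idea – run a batch of queries, remember the offset, yield, resume – including a variation of the "don't stop straight after a DROP" tweak:

    <?php
    // Illustrative sketch only (not the real JLoaderSql): execute a SQL dump in
    // batches so the task system can yield between chunks and resume later.
    function runSqlBatch(PDO $db, array $queries, $offset, $batchSize = 200)
    {
        $count = count($queries);
        $i = $offset;
        while ($i < $count && ($i - $offset) < $batchSize) {
            $query = trim($queries[$i]);
            if ($query !== '') {
                $db->exec($query);
            }
            $i++;
            // If the batch would end right after a DROP TABLE, run one more
            // query so the matching CREATE lands before we yield.
            if (($i - $offset) >= $batchSize && stripos($query, 'DROP TABLE') === 0) {
                $batchSize++;
            }
        }
        return $i; // persist this offset; the next run resumes from here
    }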
Extra fun today came from trying to write up business cases for the projects I want to work on in the next year or so, until I have to write them up again (fun, yeah!). Initially my boss (who is great) thought of me and tried to convert the files from Excel into a more open format so that I could get at them on my Mac and Linux box. Somewhere during the conversion, however, the fields got trimmed and data was lost, so I offered to edit the document directly in Excel using our Windows-only document management system, logging in via our Citrix services. The system, OpenText's DM, isn't too bad for the most part and does the job well, and today I found no fault with it. Today was the day when Microsoft's tools decided they wanted to misbehave.
Earlier in the week my boss had emailed me a doclink to the document stored in DM. A doclink is a small text file with the document number in it which triggers the system to load the specified document, something that usually works quite well. However Outlook, due to various configuration changes, decided that it didn't want to start for some reason, even though earlier in the week it was working perfectly fine. After Outlook repeatedly informed me it wanted to recreate my profile, then informed me that it couldn't contact my Exchange server and offered to let me work offline (which consequently failed due to the lack of a profile), I ended up using Mac OS X's built-in "Mail" application (yes, the email application is called "Mail") to get at my email, find the document number and open the document. Usefully enough this goes through the IMAP interface on Exchange; it worked well and doesn't suffer from some of the other issues that the Outlook clients have, such as the address book caching the old Lotus Notes addresses instead of the newer Exchange ones, resulting in emails going to the wrong place. Yay Outlook! Suffice to say I found the file and made the changes that I needed to before accosting our Exchange administrator, who had returned, to see if he could fix the issue – which he did, mind you, after some trial and error.
The last little item I looked into was building a system to version the content from Joomla! back into our document management system. The new document management officer assures me that we can do it and has even proposed a nifty way of importing the data into the system. It looks like one of the products we have, KoFax, will help us by allowing us to generate XML files which specify the documents that we're creating and their different versions. If we can get this to work it will be really awesome, as it'll mean that our website is in part integrating back into our document management system without hacking into the database! I've still got to build it and work out where we want to target the extension, but suffice to say it's on my project list for next year.
Today: 05-Nov-2008: Kerberos and Joomla! 1.6's Backup system
Today had a lackadaisical start with me working on getting Dawn of War: Winter Assault to work on my Mac (once it was fully patched it seems to have started working, yay for no copy protection!) after doing a whole heap of disk swapping last night to get the base game installed, only to see it complain it couldn't find a CD/DVD drive. After I installed the 1.50 patch it asked me if I wanted to start, and for the first time it actually started the game without issues. I managed to load it up and play a quick game and fielded a call from my Mum before heading to work. I also added some projects to my list and categorised items; now on the todo list: an automated login key generator for Joomla! and a component to complement the ban IP/address plugin. Now all I need is time!
The Kerberos keys that I had asked to be remade were ready for me by the time I got there. It took a bit of time to rebuild the different keytab files to support the vhost environment (you need to merge the respective keytab files), but once that was done everything was working. Well, mostly working. Firefox on my Mac worked fine; Firefox on the Windows desktops I tried worked once configured (see http://grolmsnet.de/kerbtut/firefox.html for what you need to do to get Firefox to do Negotiate); IE on most of the desktops worked fine, however some installations weren't getting SSO; all of the Citrix servers seem not to pass through authentication (they end up going in a weird loop where IE appears to keep reloading the page); and Safari on my Mac doesn't seem to want to play the game either. Perhaps I'll sort that out over the next week or so, but it consumed a reasonable amount of time going through different IE versions and checking whether they worked. The only other machine not to play the game seems to be Firefox on my Linux desktop (it should be working), so I'll have a look at the ones that don't work and why. For the Windows boxes I have the feeling that the Netware client is causing issues (which would explain Citrix), so hopefully when our network eradicates Novell we'll be fine.
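For anyone doing the same, merging the per-vhost keytabs is straightforward with MIT Kerberos' ktutil (the file names here are made up):

    # Combine the individual keytabs into one file for Apache to read.
    ktutil
    ktutil:  rkt /etc/apache2/HTTP-intranet.keytab
    ktutil:  rkt /etc/apache2/HTTP-othervhost.keytab
    ktutil:  wkt /etc/apache2/http-merged.keytab
    ktutil:  quit

    # Verify the principals (and key version numbers) made it in:
    klist -ke /etc/apache2/http-merged.keytab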
And that leads us to the afternoon's fun of building Joomla! 1.6's backup system. I've managed to get the system to export the sample database, reimport it and then delete the files afterwards, so I've moved onto much larger goals: I've taken one of our internal websites and I'm trying to get it to import. Suffice to say that it has enough data to cause an issue with the system. For data loading I'm using a heavily modified version of Alexey Ozerov's "BigDump" script, which has been used in the past, in a less modified form, for the Joomla! 1.5 migrator. It is slowly being converted to use the new Tasks system in 1.6, which is another concept borrowed from the 1.0 migrator. The Tasks system in 1.6 has two parts: a task set, which is a container, and the individual tasks within it. So considering backups, one task set might be a full backup run of the site, with the individual tasks being an SQL backup, a file backup (tar archive perhaps?) and maybe copying that to a remote FTP site or similar. So the one task set would have an "SQL backup" task and a "file backup" task. Extension package installation may do something similar, splitting the install into different parts.
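That's the concept rather than the API, so purely as an illustration (class names invented, not the real 1.6 code), a task set is just an ordered container of small resumable tasks:

    <?php
    // Invented, illustrative shape of the idea: each task does a little work per
    // step and reports when finished; the set runs tasks until time is up.
    interface BackupTask
    {
        /** Do a small unit of work; return true once completely finished. */
        public function step();
    }

    class TaskSet
    {
        private $tasks = array();

        public function add(BackupTask $task)
        {
            $this->tasks[] = $task;
        }

        /** Run tasks until the time limit, then yield back to the caller. */
        public function run($timeLimit = 10)
        {
            $start = time();
            foreach ($this->tasks as $key => $task) {
                while (!$task->step()) {
                    if ((time() - $start) > $timeLimit) {
                        return false; // yield; call run() again later to resume
                    }
                }
                unset($this->tasks[$key]); // finished, drop it from the set
            }
            return true; // the whole set (SQL backup + file backup) is done
        }
    }

    // A full backup run might then be composed as (hypothetical tasks):
    //   $set = new TaskSet();
    //   $set->add(new SqlBackupTask($db, $dumpFile));
    //   $set->add(new FileBackupTask($siteRoot, $tarFile));
    //   while (!$set->run()) { /* persist state, continue next request */ }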
A new part of this is the data load system, which provides functionality to read and load data files. At the moment it only supports SQL, but with some luck I'll be able to create a CSV loader as well, again probably reusing Alexey's code in part. I'm mostly through building this system, though I'm experiencing a strange issue with my sample data (hence the updates haven't been committed to J!'s SVN repository today) where it loads the file up through almost 2000 queries and then seems to stop suddenly. I'm not quite sure what's going on, but I'm happy enough that the task system is picking up and storing values for it to progress as far as it does.
Another successful day spent on my Mac as well. NetBeans doesn't seem to want to look at my project any more, crashing instead of loading it, which is disappointing, but I'll work that out another day. And now it's time to enjoy some Dawn of War.
Today: 04-Nov-2008: Fun with Kerberos
Today was a mostly ordinary day, though the day started with me buying Red Alert 3, so that wasn’t too bad – yay! Australia! A week behind the rest of the world! I could have pirated the game and had it faster and cheaper, perhaps even finished! But I digress, it was an ordinary day.
Today is Melbourne Cup day, being the first Tuesday of November, so we had a luncheon of sorts and a drawing for the horses. Didn’t win, the food was good, I’m $10 poorer and such is life.
I've been spending more time at work using my Mac as a primary machine. Since I've moved to Exchange from Domino (or Outlook from Notes), I've gotten Evolution on Linux mostly working (with the exception that it doesn't automatically look up names for emails, which is tedious) and Apple's Mail and Address Book both playing nicely with Exchange. I do miss having Notes on my Linux desktop, where things mostly worked, albeit slowly and while consuming large amounts of memory, with all of the features normally available. Mail's ability to do autocompletion is what is drawing me back to it as a client, which when you start writing emails is actually more useful than you would think. It's still not up to par with the Notes autocomplete, which was quite cool and a lot more advanced than either Mail's or Outlook's (I get Outlook via Citrix).
I've also been trying out NetBeans' PHP Early Access through a nightly build (it has the ability to create PHP projects from existing sources) and I'm impressed with it. I tried it out because I wanted to try debugging my PHP instance, and the dated version of Eclipse I had (3.2) seems to have issues – more than likely my fault – and I don't want to waste time trying to fix it. NetBeans installed and worked almost instantly, though it took me a while to find where I could change the params to get J! to route items properly. I managed to work out the bug that I was having without too much issue. I knew what it was but not where it was: it turned out to be exactly what I thought, an assignment operator used instead of the append operator. The Subversion support seems to be a bit off and doesn't work yet, so I'm not quite ready to ditch Eclipse – but I'll try later versions to see what I get.
I had a chat with the principal (we have principal, manager, director, CEO as our chain of command) about the projects that I'm doing and the ones I'm interested in, so I'll have to do some paperwork and business cases for the new projects and justify them. We've recently got a new manager who is trying to find where everything is, so part of this is explaining everything so that he can get a grasp of how the system works.
Then I spent the majority of the afternoon with one of the ITS guys working through how our Citrix boxes work with Flex profiles and mandatory profiles, filling in the gaps in his knowledge about how the different parts of the system fit together and why items might break or behave in a particular way. I think he's worked out how it works and he's even figured out why a few issues are happening. So nothing exciting, but useful.
And finally I had fun with Kerberos. I built the Kerberos module on the SLES10 server, installed it, restarted Apache and tried to get it to work. On my Mac both Safari and Firefox requested a username and password instead of using a Kerberos token, and IE6 in my Citrix session seemed to just go into a weird infinite loop. I slowly worked through my entire Kerberos configuration on the server until I got to looking at the keys. It turns out that the keys were created with the wrong virtual host name for the server, which was causing the issues. The keys for the real server name actually worked fine when I got around to testing them, which proves that everything will work once I get the correct keys. The last part is a fix to the Citrix system, which for some reason thinks that the intranet site is actually on the internet, but I'm assured that this should be easy to achieve. Getting Kerberos up and running was pretty easy, faulty keys aside, compared with some of the nightmares I've had getting items to play nicely together. I'll probably add something about it to my guide (http://sammoffatt.com.au/jauthtools/Kerberos) to help with these items.
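For reference, the Apache side with mod_auth_kerb ends up looking roughly like this (realm, host name and keytab path are placeholders); the catch from today is that the HTTP/... principal in the keytab has to match the virtual host name the browser actually requests, not just the box's real name:

    <VirtualHost *:80>
        ServerName intranet.example.edu.au
        <Location />
            AuthType Kerberos
            AuthName "Intranet SSO"
            KrbAuthRealms EXAMPLE.EDU.AU
            KrbServiceName HTTP
            # Keytab must contain HTTP/intranet.example.edu.au@EXAMPLE.EDU.AU
            Krb5KeyTab /etc/apache2/http.keytab
            KrbMethodNegotiate On
            KrbMethodK5Passwd Off
            Require valid-user
        </Location>
    </VirtualHost>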
Who knows, I may have even figured this Kerberos thing out!
A subtle reminder on the dangers of non-free software
I'm a proponent of open source software and free software, and in some cases even non-free software has its advantages so long as the source is available (e.g. commercial GPL), but someone gave me a link the other day that reminded me why a lot of closed systems have issues:
http://forums.cubecart.com/index.php?showtopic=33714&pid=151145&st=0entry151145
Their server went down, which meant you (temporarily) couldn't update your site. More of a disservice. Similar to the issues legitimate people have had with Microsoft's Windows Genuine Advantage "service".
JAuthTools Update
Today I've been working on some new stuff for Joomla! to enable SSO between disparate Joomla! instances. I've tested it on Joomla! 1.0 and on 1.5 in legacy mode. I'll do some more work later to get it working with Joomla! 1.5 in native mode and to better integrate with JAuthTools for 1.5 (to utilise the SSO system that I've written for 1.5). If you're interested, check out the JAuthTools SVN here: http://joomlacode.org/svn/jauthtools/sso/joomla10x/soapsso. Once you've checked it out, you can use the install-from-directory feature to install it into your test sites. I'll have packages up in the next few days.
I'm also looking at doing some work on JAuthTools for 1.5 to improve support. There appear to be some issues, so I want to get it back up and running, as well as porting/merging some of the features from the LDAP SSI into 1.5 and the LDAP Authentication plugin. I'll probably also update the docs on the wiki to reflect the new features.
Things on my todo list after I clean up my released JAuthTools plugins:
- Backlink Manager
- JAuthTools Manager
- JDiagnostics for 1.5
Today, 14-Jan-08: Query languages, LDAP, business intelligence and filesystems
I'm going to start regularly writing daily posts about what I did today and the things I found interesting; I'll at least try anyway.
First up for today is a personal thing: I completed a rather largish Uni assignment today, which reminded me of all of the pains that come with C++, but to follow that I returned to working on my filesystem in C, which is just more pain. I got a quick response back, and almost full marks (96%), so I'm happy for all of the time I put in to get it done, and it's probably far more complicated than anything else that will be submitted (it used Boost Signals and a whole heap of other things that I don't think will be taught in the subject for a long time). But hey, that's just Uni!
Today I finally managed to get Pentaho, some business intelligence (BI) software, to play nicely with Novell eDirectory's LDAP interface. I must have missed the option, but Pentaho doesn't seem to accept anonymous binding to the LDAP server, which means I need to bind as a user. By default our users, funnily enough, have less access than the anonymous account (which is actually a proxy account with full browse permissions). The solution was simple enough: we shunted our dummy Pentaho user into the same group as the anonymous proxy account and everything worked. So I've now got Pentaho using LDAP for authentication (yay!) and a MySQL database for its role/group permissions. Funnily enough, when it's all said and done, the documentation is pretty close to the mark.
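A quick way to check that the bind account really has the browse rights it needs, before blaming Pentaho, is an ldapsearch from the command line (the DNs here are made up):

    # Bind as the Pentaho service user (-D, with -W prompting for its password)
    # using simple auth (-x) and confirm it can actually see user entries.
    ldapsearch -x -H ldap://edir.example.edu.au \
        -D "cn=pentaho,ou=services,o=example" -W \
        -b "ou=people,o=example" "(objectClass=person)" cn mail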
But once I had that, I didn't have a way to manage the groups/roles within Pentaho, so I ended up having to write some small PHP to manage that. Luckily I worked on a project a while back that I called "Joomla! Central Management for Users", which basically connected directly to the MySQL databases of Joomla! installs and altered the users. I had originally built it with a plugin infrastructure in mind so that I could plug other stuff into it later. As of this morning it only had a 'connector' for Joomla! 1.0 via MySQL and for LDAP; now it has one for the Pentaho security tables too. This means I can easily copy users from LDAP or Joomla! into Pentaho without too many issues, and it already has a debugged user interface. But wait, there's more!
When I was originally developing the tool I wrote a query language for it. See, SQL is a great language for databases, but it's a bit hard to apply in situations where you don't quite need all of that power. So I wrote my own query language. It's quite simple: it can validate simple attributes and allows for set operations across "sites" (a site is a container for users and groups). So for example, if I want to see all of the users who are on our web site but not in our LDAP directory:
existsin “Web Sites” and not existsin “LDAP”
Primitive, sure, but it beats writing a large SQL expression for something simple. I hope to expand on it, but it already does what it needs to do for the time being.
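Under the hood that particular query is really just a set difference; a toy PHP equivalent, with a made-up $sites array standing in for the real site connectors, would be:

    <?php
    // Illustrative only: the parser and connectors aren't shown, just the set
    // operation behind: existsin "Web Sites" and not existsin "LDAP"
    $sites = array(
        'Web Sites' => array('alice', 'bob', 'carol'),
        'LDAP'      => array('alice', 'carol', 'dave'),
    );

    $onWeb   = $sites['Web Sites'];
    $inLdap  = $sites['LDAP'];
    $orphans = array_diff($onWeb, $inLdap);  // web users missing from LDAP

    print_r(array_values($orphans));         // prints: Array ( [0] => bob )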
So I've covered query languages, LDAP and BI! All I need now is the filesystem news. Today there was a whole heap of fanfare on Slashdot about the ZFS news from Apple; whilst that's cool and all (especially since I don't mind Apple's UI), I personally have my own filesystem that I've gotten back into to do some work on. It also happens to be a Uni assignment due on Friday! So I'll be back to working on that and hopefully I'll have it at a nice enough stage that I can do some lightning talks at linux.conf.au!