Shift8 Creative Graphic Design and Website Development

Web Development

Lithium Quick Tip: Displaying Dates

Posted by Tom on Fri, Jan 14 2011 09:34:00

So if you're using Lithium with MongoDB, you can set the $_schema property in your model, which allows you to define data types for your fields. Things may look like this:

protected $_schema = array(
  '_id' => array('type' => 'id'),
  'created' => array('type' => 'date')
);

With that, the "created" field will be of type "date," stored as a MongoDB date object. This date object isn't going to display properly when you go to print it back out in your view templates; you'll end up with some weird number value. The actual timestamp lives inside the date object under the "sec" key. So it would be $document->created->sec in your view template (assuming $document was your result from the database).

How about formatting a pretty date? If you're coming from CakePHP or some other frameworks, you may be spoiled by having a time or date helper. Lithium is pretty lean and leaves that as one of your responsibilities. Fortunately, alkemann has created some helpers for Lithium that mimic a few from CakePHP, including a Time helper. So in your view template, you can use:

echo $this->time->to('nice', $document->created->sec); 

That will output something more readable, like "Fri, Jan 14th 2011, 02:29", instead. Don't have or want those helpers? You may be crazy, but you can also use:

echo date('Y-M-d h:i:s', $document->created->sec); 

However, I would definitely give those AL13 helpers a look; they are pretty nice. The important thing to remember here is that when using MongoDB date objects, your timestamp is under that "sec" key. Of course, you can always store timestamp integers or string values instead, but then you're really crazy.
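If you want to keep the "sec" detail in one place without pulling in helpers, a tiny formatting function does the job. This is just a sketch; the function name is my own invention, not part of Lithium:

```php
<?php
// Minimal sketch: format a MongoDB date object without any helper.
// A MongoDB date object exposes its Unix timestamp via the 'sec' property.
function formatMongoDate($mongoDate, $format = 'D, M jS Y, H:i') {
    return date($format, $mongoDate->sec);
}

// In a view template (assuming $document->created is the date object):
// echo formatMongoDate($document->created);
```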

Agile Uploader 3

Posted by Tom on Sat, Jan 08 2011 12:50:00

I'm pleased to announce that, after a few days of really focused work, I've released a new version of Agile Uploader. For those who haven't seen it before, it's a Flash-based multiple file upload tool that resizes images on the client side before uploading. Why another multiple file upload tool? Well, swfupload, flash file uploader, uploadify, plupload, and all the others (too many to mention) work great, and believe me, their authors do a great job -- it's not easy and the tools do work well -- but they didn't quite do what I needed. First off, several of those I just mentioned don't have client-side resizing for image file types. What I mean by that is that Flash can scale and re-encode new JPEG files that can then be uploaded to your server, rather than having your server do the resizing. Obviously this helps with hosting costs and site scalability, and it helps those on shared hosting. Second, Agile Uploader is fairly lean and extremely customizable. It lets you control how things look and function with regard to what types of files you allow and how you want it to resize images (if at all).

Agile Uploader asynchronously resizes and encodes images, so you can keep attaching more files while it's working. The tool has a bunch of other options as well as tight communication with JavaScript. I wrote a jQuery plugin to accompany it, but you could write your own JavaScript should you choose (or alter the jQuery plugin). Flash will pass various event data to a defined JavaScript function as it does its job. In fact, you can control just about every facet of the upload tool right from JavaScript. The only thing you can't use JavaScript for is the "browse" or "attach" button; sadly, that's against ActionScript's security rules. However, you can customize the button that appears in the swf by specifying the path to your own image(s). You can also send the form via JavaScript if you want to do so automatically or on some event (like a button or link click).

Documentation for this new version is on the way; there have been a lot of changes since version 2.x. It's become a lot more organized and simplified. One of the long-awaited features is the ability to select multiple files at the same time. While the tool always uploaded multiple files at once, the user previously had to click the button and browse for each file one at a time. So the user experience has improved quite a bit, along with a few minor bug fixes.

You can see more about this tool, and a working demo, on the Agile Uploader project page.

Redirect to Anywhere Using Closure with Lithium Router

Posted by Tom on Mon, Jan 03 2011 15:07:00

One of my New Year's resolutions is to post more to my blog. Even if it's short posts. That's fine. So to that effort, I'll leave you all with a little quick tip for the Lithium framework. If you want to redirect a request to an external server or any URL you can do so in one of two ways.

Redirect Controller Action
The first method is how you might do it with the CakePHP framework...Simply route your URLs to a controller with an action that calls $this->redirect(), passing it the URL you wish to redirect to. That might look like this:

Router::connect('/about', array('controller' => 'Redirects', 'action' => 'go_to_url'));

Then you'd need your "RedirectsController" to have a method something like this:

public function go_to_url($url = "/") {
    return $this->redirect($url);
}

That's great and will work, but there's actually a bit more involved. First off, you have to have an extra controller (which could handle all sorts of "legacy" issues if you're migrating from an old codebase, so that's fine if you want to do so), and that means a bit more code than necessary in most cases. You might also have to tell it not to use a template, etc.

Router Closure
If you didn't know already, Lithium allows for closures in its Router::connect() method. It's very very nice. So here's what you can do instead: 

Router::connect('/about', array(), function($request) {
    // redirect to any URL, even one on an external server
    return new \lithium\action\Response(array('location' => 'http://example.com/about'));
});

That's all there is to it. With this method, you are also bypassing a good chunk of the framework, so it should perform a bit better too. You can get a little clever there and add some code to check whether the URL exists and, if not, redirect to a 404 page on your server or something. Go wild!
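Here's a rough sketch of that "check first" idea. The target URL is made up, and the get_headers() check is just one quick way to test for existence; treat this as a starting point rather than production code:

```php
<?php
// In config/routes.php, using the closure route style described above.
use lithium\action\Response;

Router::connect('/about', array(), function($request) {
    $target = 'http://example.com/about'; // hypothetical external URL
    // Quick existence check; a missing page falls back to a local 404.
    $headers = @get_headers($target);
    $exists = $headers && strpos($headers[0], '200') !== false;
    return new Response(array('location' => $exists ? $target : '/404'));
});
```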

Li3_Access: A Lithium Access Control Class

Posted by Tom on Tue, Dec 21 2010 17:53:00

So of course, me being me, I'm working on a million projects all at the same time. Lately I've been trying to push a lot on my Lithium CMS, Minerva. As a byproduct of all that, I am happy to release li3_access, an access control class for the Lithium framework. Don't get too excited; it won't handle your ACL the way you may be looking for out of the box, but you can definitely use it to handle your ACL/RBAC. It extends Lithium's Adaptable class, so expect adapters for RBAC in the future.

Not to go against the flow: there's actually a spec for an ACL system for Lithium, and I built my library to fall in line with it so that hopefully there will be less wheel reinventing. Not that it is a huge class or anything, but it was well thought out and it does come with test cases -- bonus! :)

Of course, it doesn't really do much without an adapter. While I don't yet have a robust RBAC adapter that might use some sort of tree system from the database, I do have a "rules-based" adapter that comes bundled with the library.

You can basically think of this rules-based adapter like validation. There's even a method within the adapter called "add", and it does exactly what the validation class's does: it adds rules to check for access. You can check multiple rules at once, and if any come back false, it returns an array with some data, including a message explaining why access was denied and a redirect URL. It's very quick and easy to use. Many sites will be able to use this alone and won't require some larger database-driven ACL system. It really involves no queries (well, unless you need data from the database to determine true or false).

Using this adapter you can get very detailed control over access rules; for example, you can lock out specific users during specific times of the day for specific content if you want. That's simply something you can't do with a traditional RBAC system -- you'd have to take your RBAC system and bolt extra code onto it to catch those special conditions. This adapter lets you neatly organize and check against your rules, much like validation.
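To make the idea concrete, here's a generic sketch of rules-as-validation. To be clear, this is not the actual li3_access API; the names and return shape here are made up for illustration:

```php
<?php
// Each rule is a closure returning true (allow) or false (deny).
// The first failing rule denies access and explains why.
function checkAccess(array $rules, array $user) {
    foreach ($rules as $name => $rule) {
        if (!$rule($user)) {
            return array(
                'allowed' => false,
                'message' => "Access denied by rule '{$name}'.",
                'redirect' => '/users/login'
            );
        }
    }
    return array('allowed' => true);
}

// Example rules: a role check and a time-of-day check -- the kind of
// condition a traditional RBAC tree can't express on its own.
$rules = array(
    'isAdmin' => function ($user) {
        return isset($user['role']) && $user['role'] === 'admin';
    },
    'businessHours' => function ($user) {
        $hour = (int) date('G');
        return $hour >= 9 && $hour < 17;
    }
);
```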

For more information and the code, check out li3_access on Github. Remember to keep your eyes open for new adapters in the future, or feel free to contribute one! Thanks, hope you enjoy.

My Ideal Development Setup

Posted by Tom on Mon, Dec 20 2010 10:07:00

I finally took the plunge and got a new laptop (with the kind help of my parents and a generous Christmas gift). It's been a few years, really, and I did need a new computer. What did I end up with? Well, I almost got a Macbook Pro (of course), being the good web developer (and graphic designer) that I am. Apple finally came down on their pricing a while ago, so I was all ready to do that, but then I saw that this HP ENVY was half the cost for better hardware. Apple needs to put a 1GB graphics card in their Macbook Pros; 512MB on a $3,000 laptop is just pathetic.

So, reasoning aside, how was I going to set up this machine for web development, design, graphic design, and photography? Prior to this computer, I dual booted Windows (Vista, ick) and Fedora Linux. Awesome: it ran fast, it was simple to set up a web server, and it basically mirrored the servers I worked on. Photo editing was tough in Linux, though, and forget Adobe (although there is a portable version of Photoshop that works); GIMP was the real tool. So I had to go back to Windows for any design work. Not that I do a whole lot these days, and that's more because I've learned to deal without those nice programs and get by with GIMP (and GIMP is good, don't get me wrong).

Now, this time around I decided that since I have 4 cores (8 virtual), why not run virtualized machines? No more dual booting, no more hard drive partitioning, etc. So I set up Fedora 14 virtualized, and here is, in my opinion, the very best setup for web development/design that you can get (and you can do this if you have a Mac too):

Base Host OS: Windows 7 (or OS X)

Virtualized Guest OS: Linux and Windows (or OS X -- yes, you can virtualize it, but it's slow; that's OK though, read further down for why)

You virtualize Linux so you can set up a web server that's practically identical to the ones you'll be uploading your files to for the live sites. Why set up a web server with MacPorts, Homebrew, or MAMP? That's silly. Why use XAMPP or set up IIS if using a Windows host? Oh god, no, not IIS. OS X is better, but at the same time you just can't beat a Linux package manager for speed and ease (and for grabbing all the libraries you need when you do have to compile something). Also, you really want to mirror, as closely as you can, the actual server your files will live on, because then you'll run into fewer accidents. You know that everything will work, not that it "should" work. Rolling with updates is also much easier.

So the trick is to make sure your virtual machine has internet access and its IP is static. With VMware and VirtualBox you can set the network to NAT, and then within the guest OS (in Linux's settings for the network card) you can assign a manual IP address within the range of allowed addresses. Mine is set to an address that won't conflict with any other machine on my LAN; it's not even in the same range.

You will probably want to disable SELinux, since you aren't too concerned with security here, and it'll make things a lot nicer when it comes to shared folders. Make sure your firewall settings are letting Apache through as well. You'll want to set up a shared folder so your files are hosted on your host machine; in my case, Windows 7 holds my web site files. I have a different folder for each site, and the Apache conf uses virtual hosts to assign a domain to each site/folder, pointing each one at the shared folder's location. You can Google all of this; I don't want to make this a tutorial (unless I get some requests), just a general idea of how to set things up. Now you can edit files on the host OS with the editor of your choice, no problemo. Use git, svn, etc. The most important thing about this is that your files are now always available, whether the virtual machine is running or not, and they are a bit better protected. Say your virtual machine gets corrupted somehow? Oh crap! Well, not anymore. Worst case, you set it up again and lose a few Apache config files. No big deal.
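For reference, one of those per-site virtual hosts might look roughly like this. The domain and the mount path are made-up examples; VMware and VirtualBox mount shared folders at different locations:

```apache
# Hypothetical vhost file on the Fedora guest, e.g. /etc/httpd/conf.d/lithium.conf
<VirtualHost *:80>
    ServerName lithium.local
    # DocumentRoot lives on the shared folder mounted from the host OS
    DocumentRoot /mnt/shared/sites/lithium/webroot
    <Directory /mnt/shared/sites/lithium/webroot>
        AllowOverride All
    </Directory>
</VirtualHost>
```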

How do you access this web server from the host OS? You could set up a DNS server (like BIND) if you were nuts, then set your host OS to look at that DNS before going out to your ISP. I personally didn't want to do that because I figured it would make things ever so slightly slower -- and if Linux wasn't running, maybe even slower still, until things timed out before moving on to the next DNS. Instead, I alter my hosts file. In Linux this is easy because Gnome (and I'm sure other desktops) has a GUI to add entries to your hosts file. Very nice, but wait just one minute! There's an app for that! Windows and OS X both have 3rd party apps that will let you change your hosts file in a nicer way. The one I'm using on Windows 7 is called HostsMan. It's actually quite cool and provides some other handy features to help warn you about possible hijacks and such. Anyway, I just use it for my sites. So I have a lithium.local, for example, that points to the IP of the Linux virtual machine. Save that...Open up the web browser and bingo! Again, there are apps for OS X too, and some that work from your dashboard I believe, which is handy.
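Whatever tool you use, the entry it writes boils down to a single line in the hosts file. The IP here is an example; use whatever static address you gave the virtual machine:

```
# /etc/hosts on Linux/OS X, or C:\Windows\System32\drivers\etc\hosts on Windows
192.168.100.10    lithium.local
```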

Yes, I do need to add a new entry (line) each time I set up a new site, but I also have to do that in Linux under the Apache configuration for the virtual host anyway. No big deal; it takes less than 5 minutes each time I set up a new site. I could rig up some sort of automation script, maybe, or something through the web browser -- you could even set up a free control panel like you see on hosting providers...But I'm ok with typing in the config manually from a terminal.

Bonus! I don't need to use PuTTY (or KiTTY) because I can use the terminal from Linux. Putting the virtual machine into "unity" mode ("seamless" mode for VirtualBox users) makes this so much nicer. I basically now look at things and get confused...Am I running Windows? Or Linux? Nice, best of both worlds.

Now, virtualizing OS X isn't kosher. It's a no-no. But...you can actually do it. I find that I can just test with Chrome and Firefox and be ok; I don't use OS X. There's also a Safari for Windows (though it's slightly different, I hear). However, Firefox and Chrome are pretty good, and I don't typically end up with OS X-only CSS issues. Conversely, if you're doing this in OS X, you can virtualize Windows XP or 7. Or ME? Vista? No no, you'll be ok with XP or 7, haha. Now you have two virtual machines giving you ALL of the major operating systems. Why do this? Well, dual booting is a pain in the rear, and it doesn't matter how fast your Windows or OS X guest runs because you're just going to use it for browser testing. Microsoft used to have images available just for this purpose. They were incomplete operating systems, but gave you just enough to test browsers. Personally I think it was genius, but also...they owed it to the world because of Internet Explorer.

So there you have it: my idea of the perfect setup for a web developer/designer. I can run all of the programs that I want for design (since Linux isn't great for design, of course) and also have my proper web server with Linux. There are also a few helpful apps that Linux has that neither OS X nor Windows has, so now I won't miss out on those. I can test just about every web browser and make sure my site looks the same in all of them. The things we have to go through in order to make web sites. It's absurd...But at least we don't need 5 computers to do it. Running two operating systems full time is not a big deal on any newer i5 or i7 processor either. Also, as an added bonus, I can play Starcraft 2 :)

Yes, Apple fanboys will say they can do it all with just OS X...Well, no, you can't actually. You can't test all the browsers without virtualizing, and your web server is just going to be a joke running on OS X. I've set up several web servers before on just about every operating system short of Solaris. So believe me, you want the "real deal": something that's going to reflect the final live environment. You don't see hosting companies offering up OS X to run your web sites, now do ya? Also, it's good to have your web server virtualized because it doesn't take a lot of resources to run and it will familiarize you with a Linux machine. That's important! It's also still just as convenient as setting up a web server on OS X (through ports/brew or compiling from scratch). In fact, it's more convenient. While MAMP is probably the most convenient, it doesn't offer enough when it comes to all those PHP extensions. Now, this is strictly speaking to PHP developers...But what if you want to run other things on the server? Some things simply aren't available for Windows or OS X, and that's why you really want to virtualize Linux for your web development. Plus, you can have multiple machines and simulate/set up/test load balancing and database clustering...But of course, to each their own.

Decompression: Discoveries and Current Projects

Posted by Tom on Tue, Oct 26 2010 19:35:00

So I want to make sure that I keep posting content on my blog. I not only want people to come back to my site, but I want to get in a good habit of writing as well as make sure that I'm jotting down some of my thoughts. The amount of crap that runs through my head is probably more than the average person's. That's not a pat on my own back, that's actually quite sad because I just want things to turn "off" sometimes. Sleep deprivation, lack of focus sometimes, and overall insanity is really what it leads to. Sticky notes and endless reams of printer paper, and sketchbooks (not that I sketch anymore like I should) really aren't cutting it. I need to decompress on my blog as well. So I'm adding some new categories to help stay organized.

Believe it or not, I use my own blog for reference. I do come back to what I wrote down, use it to copy and paste code snippets, and keep tabs on where I was and where I am now. Sometimes I rant, sure, and those posts may not do any good for anyone...But I'm bored, and probably angry at the moment. Or maybe I'm procrastinating. Like I am now...It's about the middle of the evening and I should be working on my little lightweight CMS, but instead I'm writing...Hmm...Oh well.

So, in the spirit of keeping it interesting for you all, check this out! Have an Android phone? RemoteDroid can be found in the app market, and the server can be found on its Google Code site. It's cross-platform; just run the jar file. Make sure that you have port 57110 open for UDP traffic (check your computer's firewall and your router).

You can hold down on the track pad area with one finger and swipe another up and down and it should scroll. It may support some other gestures too; I haven't tried. It works really well as a mouse. The keyboard I found to be a little slow, but bearable for simple things like surfing the internet, or perhaps for your media center computer (e.g. Boxee). That's really what I intend to use it for. Sure, there's a Boxee remote app for nearly all phones, but this works much better in my opinion...Especially if you want to do more, or Boxee crashes, or you don't run it all the time. Boxee seems to get a weird resolution-change issue after my computer wakes up...So I have to exit and restart it. Can't do that with the remote app. However, the remote app does have a novel picture of whatever is playing.

Anyway, a nice little discovery. What else? Well, just boring stuff -- things that I'm working on, as I mentioned...A lightweight CMS. It's coming along well. It was the basis of the previous post about including external JavaScript files from another JavaScript file. This lightweight CMS doesn't use any framework...Or even a database. It's intended for very basic, static sites. Old sites. Imagine those sites for small businesses that were designed a while ago -- or not necessarily a while ago, but perhaps designed very statically...This is quite common, actually, even in a world where we have Wordpress and Joomla and Drupal and Croogo! Don't forget that nice CMS. One day, add Minerva to that list too. Another project of mine; for those of you who want to laugh, go ahead...But I promise that one will get finished as well.

I'm calling this lightweight CMS "Argos" with the metaphor/slogan of "Who says you can't teach an old dog new tricks?" So these very simple (1-20 page) sites are really the target; you can't use Argos on a dynamic site. Argos actually writes directly to the HTML or PHP page and alters its HTML contents. It stores data in JSON files to keep backups (also for historic rollbacks) and also backs up the entire site on installation (well, it will when I'm done). This ensures that a site using Argos doesn't get messed up, and it also helps to prevent user mistakes...Or rather, allows a user to "undo" things...Something I think is missing from many CMSes out there. It's also designed to be quite compatible: basically, PHP 5 with the JSON extension is the extent of the server requirements. Of course, many shared hosts do offer MySQL, but I figured I'd keep it as compatible as possible. Plus, do I really want to set up a database? What if I'm not the one installing the CMS? I want to deliver it with one PHP script: let it download the files it needs and walk the user through installation. I'm assuming that they don't know what MySQL is, and they certainly don't know how to set up a database and then a user to access said database.
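The backup/rollback idea is simple enough to sketch in plain PHP. The function and file layout here are made up for illustration, not actual Argos code:

```php
<?php
// Before rewriting a page, stash its current contents in a JSON file,
// keyed by timestamp, so edits can be rolled back later.
function backupPage($pagePath, $backupPath) {
    $backups = file_exists($backupPath)
        ? json_decode(file_get_contents($backupPath), true)
        : array();
    $backups[(string) time()] = file_get_contents($pagePath);
    file_put_contents($backupPath, json_encode($backups));
    return count($backups); // number of revisions stored so far
}
```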

The CMS does have a backend, but it's very simple for now. It does include a nifty file manager script that I found. So, there's another great find for you all...phpFileManager. It's just one PHP script, actually, and it works really well! I was thinking about expanding upon it, adding a few features, and then just using it as a "swiss army knife" for web development -- adding things like markItUp! and so on, but keeping it all one file. It doesn't matter if it ends up being a few megabytes even...Just being able to get onto a server, wget the file from my server or FTP it somewhere, then load it up and go work on something in a pinch would be great.

Anyway, enjoy the discoveries and updates. I do post minor things like this on my Twitter feed by the way, so follow me!

Including External JavaScript From Another JavaScript File

Posted by Tom on Sun, Oct 24 2010 10:53:00

So I was looking around the internet for a way to include external JavaScript files in another JavaScript file. After a little bit of searching, the goal was clear...Use AJAX (XMLHttpRequest) to pull in the contents and then run evil on it. Err, I mean eval()... I definitely feel like it's wrong and against all the good rules we know, but it works and I think it's the only logical way to do it. Why on earth would one ever do it? Well, typically you wouldn't. Typically you'd just include your scripts on the HTML page...However, in a current project for a client I wanted to include one JavaScript file...Yet include jQuery and other scripts in that file. The reason being -- simplicity. It won't be me that will be including the JavaScript on the page(s) of the final site. So I don't want the (perhaps non-savvy) person to have to include all these scripts.

Reason number two: I don't want all this JavaScript loading for every person that hits these pages. It's actually a tool to maintain content on the site (let's just say a lightweight CMS). These tools are for editors only; even though the scripts won't do anything harmful if anonymous visitors hit the pages, they're still something normal people don't need, and they would technically slow down the page load time. However...Not if the embedded script is small and it selectively (based on "who" is loading the page) loads these external scripts. What if the site doesn't use jQuery? Why include jQuery and all this other stuff just so the client can edit the page? That sounds silly. Why is the JavaScript needed in the first place? Well, the CMS uses it for "edit in place" abilities, and I have to assume the page is an HTML file, so JavaScript would be the way to go.

So eval() isn't quite evil in this case...It's actually our saving grace. Of course, I could have made a "bookmarklet" for the editor to drag into their bookmarks bar, which would load up the editing tools. This would eliminate the need for any script to be included on the page, and the normal visitor wouldn't be able to tell that there was any CMS powering the site (except for perhaps some subtle clues in the div tags)...But I don't think bookmarklets are that great of a workflow. I don't think they are obvious enough for the non-savvy user. They are a neat trick, sure enough, but an awkward process. They are a workaround themselves...So what's the difference?

So then, how does one include external JavaScript files from another one? Like so:

function IncludeJavaScript(sURL) {
    var oRequest;
    // if using a normal browser
    if (window.XMLHttpRequest) {
        oRequest = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        // if using IE 6 for some terrible reason
        oRequest = new ActiveXObject('MSXML2.XMLHTTP.3.0');
    }
    // load the script synchronously (third parameter of open() is false)
    oRequest.open('GET', sURL, false);
    oRequest.send(null);
    if (oRequest.status == 200) {
        // run the fetched script in the current scope
        eval(oRequest.responseText);
    } else {
        // alert('Error executing XMLHttpRequest.');
    }
}

// Now call the function, passing the script; here I'm including jQuery
// (adjust the path to wherever jQuery lives on your domain)
IncludeJavaScript('/js/jquery.min.js');

Note that you will want the 3rd parameter of open() to be false so that the data loads synchronously. If it was asynchronously loaded, the rest of the code would execute and you'd probably have some undefined function errors and things not working. Another thing to note is that the external JavaScript MUST be on the same domain. JavaScript has a cross-domain security restriction. This is a good thing. You may be able to get around this though by having say a PHP "proxy" script on your server that grabs the contents of an external URL (the target JavaScript) and simply prints the results on the page.
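If you do go the proxy route, whitelist what the script can fetch so you don't create an open proxy. Here's a minimal sketch; the alias, URL, and file name ("proxy.php") are made-up examples, not part of any library:

```php
<?php
// Sketch of a same-domain "proxy.php": map short aliases to the external
// scripts you trust, and refuse anything else.
function resolveProxyTarget($key, array $whitelist) {
    return isset($whitelist[$key]) ? $whitelist[$key] : false;
}

// The alias and URL below are made-up examples.
$whitelist = array('jquery' => 'https://code.jquery.com/jquery.min.js');
$url = resolveProxyTarget(isset($_GET['lib']) ? $_GET['lib'] : '', $whitelist);
if ($url !== false) {
    header('Content-Type: application/javascript');
    echo file_get_contents($url); // now it's served from your own domain
}
```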

Believe it or not, I found many incorrect answers to this problem out there so I hope reposting some of this research is helpful to someone else. I've tested this in Chrome and Firefox. I'm assuming it works in others too...Well, at least a modern version of IE.

PHP Frameworks and How Stuff is Built

Posted by Tom on Wed, Sep 29 2010 09:04:00

Been a while since I wrote a post, I've just been swamped. I've been working on a new (big) project at my day job and was away in NYC for a week and just crazy busy. This new project got me to thinking though. I really would like to compare frameworks. Ok ok that's a dead horse but I don't mean in terms of performance. I don't want to go download a bunch and say "see! see! that one is better! because it's faster on my inaccurate benchmark test!" No, instead I really want to outline how things are built using these frameworks. I've made the decision to kinda stick in one camp of thinking. I've chosen CakePHP and Lithium for very particular reasons... I like how these frameworks work and the direction they are going.

Is it better? I think so, but I can't say it will make your app any faster. Again, I'm not looking at performance. What am I looking at? Things like how easy it is to learn the framework, how many features it has ready to go out of the box, how fast you can build things, how easy is it to work with a team using these frameworks, and so on. This is very important and people always skip it. Honestly, it's saved the company I work for literally thousands and thousands of dollars in man hours and helped the developers there greatly in terms of their stress level and skill level. 

That said, what am I comparing? Well, right now I can only really compare Lithium, CakePHP, and Symfony. I haven't used any other frameworks in enough depth to compare. I understand some of their fundamentals and I know they exist, but without actually doing a project, it's hard to compare. That's also what I don't like about other comparisons you'll find out there: the authors often simply don't know how to use (or optimize) the framework, so of course their benchmarks and opinions are going to be skewed!

CakePHP is perhaps the fastest framework I've seen (or heard of) for building web sites/apps. You simply can't compete with its "bake" feature, how much it provides out of the box, and how many snippets and addons (helpers/components/behaviors) you'll find for it.

Can it scale? Yes. Does it have the smallest "footprint" out there? No. I don't care much how it performs, though, because it can be used on large-scale sites and it's very fast to develop with. It gets stuff done. Period. That's the basis of the entire PHP language. If we wanted to build some super efficient, high-performance application, we probably wouldn't use PHP. While PHP has become better and faster, it has always been the "get stuff done" language.

CakePHP has been my primary tool for several years now. The Croogo CMS (which I use on this site) is built on CakePHP, and it's a very good CMS. It's flexible, and since it's built on CakePHP, I immediately understand how to build plugins and work with the CMS. This idea of "standards" and "convention" is what makes CakePHP a very solid framework. It also makes it easy to learn. The documentation for CakePHP has become very good; there were some somewhat valid arguments against its quality back in the CakePHP 1.1 days, but since 1.2 everything has blossomed.

CakePHP's approach is to "hide" things from the developer. You are presented with many helpers and classes to help you get stuff done. There is also convention, and by following it, you can build web apps extremely fast. You might call it a "black box": you just put stuff in and voila. However, you can override everything and expand upon things. You can extend classes and add (nearly) anything you need to. Learning the more advanced practices with CakePHP will take time, but because it's a friendly framework, you can start simple and over time get more advanced. Perfect.

Symfony, on the other hand, is what you might call a "white box" framework. Everything is there, and you have to do everything. Doctrine really helps automate things, and the framework's use of YAML automates things too, but you still must configure it. So yes, some classes are generated for you automatically, but "how" you work with the framework is very much a known and open process. Making queries is only slightly different than in CakePHP, but you have to type more. In general, there is a lot more actual coding that needs to be done with Symfony.

So it's a configuration monster and there's a million classes, files, and folders all over. It's extremely hard to follow (thank God for "Go To Definition") and the API documentation is terrible. So don't expect any help there. It does have a few decent "books" to follow for learning the basics of the framework. 

This framework is very "exploded," for lack of a better word. It starts you off with, and kinda forces you into, extending and abstracting nearly everything. This is very good for very complex applications. It makes working in teams great because you probably won't bump heads often, and it gives you a lot of flexibility. However, like I said, that isn't to say you don't have the same flexibility with CakePHP...It just means you are starting off in a different direction.

Here's my analogy: both CakePHP and Symfony are PHP frameworks, so say each is a deck of 52 cards. CakePHP is that deck of cards in a box, stacked and in order. Symfony is the same exact 52 cards scattered in a bag. You can play a game of cards or build a card house with either. The difference is that if you want to easily find a single card, you're going to have an easier time with CakePHP.

That isn't to say Symfony is disorganized; it's just this giant piece of IKEA furniture that you need to put together. That's another great analogy: you look at the exploded view and you're like, WTF? Then at the end of it you have a ton of leftover screws, but perhaps an awesome-looking piece of furniture.

Configuration is nice, but my personal feeling is that Symfony is overkill. Oh, and using YAML when you could just use PHP arrays/objects is just silly.

Lithium, the new kid on the block. It's almost unfair to compare since it's built on PHP 5.3 and as a result can use namespaces. Namespaces are critical to a framework for organization and integration with other classes (and frameworks). However, Lithium has many of the same fundamentals that CakePHP has. Obviously no surprise if you know the people involved.

Again, I like this way of working. I like the way classes are named and how the code looks. I feel it's comfortable and efficient. However, Lithium is still young and there aren't nearly as many classes for it as there are for CakePHP. I wouldn't expect that to change too much either though because the goal of that framework is to not add a lot of bulk. It's also designed to work well with other frameworks. So you can use libraries from other frameworks...And this is where the "speed" comes back in. Since you will have more options of "what" you can use, the byproduct will be speed in terms of how quickly you can build something.

Lithium's documentation isn't the greatest, of course, because it's a young framework. However, it does have a very unique feature: the li3_docs plugin...It will basically go through your code and parse your comments to create API-style documentation for you, along with embedding code snippets. This is extremely important for working with a team of people. It's also great for your own personal reference.

Lithium also has its own test suite built in. Another killer feature. It makes testing much easier because you don't have to setup a bunch of stuff. You're just immediately ready to go.

Lithium is very well organized, its classes are easy to follow, and the API documentation is actually already good. It's far better than Symfony's API documentation. What Lithium doesn't have is a nice "book" like CakePHP and Symfony have...But give it time.

Perhaps the most unique feature Lithium has, though, and one the other frameworks (or any framework not on PHP 5.3) won't have, is the filter system. I've written a blog post about it before, but essentially it's the idea of "aspect-oriented" programming. It allows you to "tap into" the chain of events from other areas of your application. Used responsibly, this helps you overcome some of the problems that come up when developing applications with these frameworks. We've all been there...You know, when you say, "Oh, I wish I could just do this here, but I can't get at this or that" ... or the best one, "I could...but I don't want to 'hack' the core."
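To make the filter idea concrete, here's a minimal, self-contained sketch of a filter chain in plain PHP. This is purely illustrative and not Lithium's actual implementation; the Filterable class, its method names, and the example filter are all made up for this sketch:

```php
<?php
// Purely illustrative: a tiny filter chain in the spirit of Lithium's filters.
// Each filter is a closure that receives the parameters plus a $next callable,
// and must call $next() to continue the chain (or return early to short-circuit).
class Filterable
{
    protected $filters = array();

    public function applyFilter($filter)
    {
        $this->filters[] = $filter;
    }

    // The "core" logic, wrapped by any applied filters.
    public function run($params)
    {
        $core = function ($params) {
            return 'ran with ' . $params['url'];
        };
        // Build the chain from the inside out: core first, filters wrapped around it,
        // so the first filter applied is the first one to run.
        $chain = array_reduce(
            array_reverse($this->filters),
            function ($next, $filter) {
                return function ($params) use ($filter, $next) {
                    return $filter($params, $next);
                };
            },
            $core
        );
        return $chain($params);
    }
}

$dispatcher = new Filterable();
$dispatcher->applyFilter(function ($params, $next) {
    // "Tap into" the call without touching the core: rewrite input, then continue.
    $params['url'] = strtolower($params['url']);
    return $next($params);
});

echo $dispatcher->run(array('url' => '/Posts/View')); // prints "ran with /posts/view"
```

The point is that the filter rewrites the parameters (or can bail out entirely) without the core run() logic ever knowing, which is exactly the "I wish I could do this here" escape hatch described above, and without hacking the core.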

So the winner in this round up? There is no winner. It's your preference. Again, I'm not trying to say which framework is better. My personal preference is Lithium right now because of the listed reasons above. It's also fast. Mad fast. That's important to me, but it's not important to a small site. You may want to use CakePHP to build a simple blog (or use Croogo which gives you most of what you need in 5 minutes) because it's simply faster to do the work and at the end of the day, you'll be able to keep up with traffic demands. 

If you were to benchmark the frameworks, of course Lithium will blow the others out of the water, but again it's unfair to perform that benchmark when Lithium runs on PHP 5.3. 

Symfony, ah...Symfony. I hate it. I'll be honest. However, it does have its merits and value in a team environment. Is it better for working in teams than the other frameworks? No. We have revision control. Even without revision control (which you need regardless), it's still not "better." It's just different. If you prefer to work with a million little pieces and configure every tiny detail, then you will probably like Symfony. That doesn't make it "wrong" to use. It can scale and be used to do the same things that other frameworks do.

So at the end of the day, you have to look at it as a personal preference. However, I think there are definitely distinct "camps" for "how" we build things, and my preference is in the CakePHP and Lithium camp. Obviously I'm busy; I'd love to learn other frameworks to compare as well, but I won't have the time. So feel free to leave some comments on your experiences and preferences!

Posted in Web Development, php

Facebook Connect & Lithium

Posted by Tom on Tue, Aug 31 2010 14:25:00

It seems to be a week full of different authentication methods here. Last I wrote about an LDAP Auth class adapter for Lithium that I wrote and now I'm going to go over using Facebook Connect with your Lithium project. As it turns out, it's also very easy and you can put everything you need within a library to keep things modular.

First things first, you'll need the Facebook Connect PHP SDK...It also wouldn't hurt to grab the JavaScript SDK; you'll probably end up using it. Continuing here with "step 1," you're going to alter that single PHP SDK file. I know, I know...Hang on and listen. When we use Lithium, we really like namespaces. I pray for the day that all PHP libraries we pick up are namespaced, but that just isn't a reality yet. Fortunately, the Facebook PHP SDK is very small: just one file with a few classes. Cool! So within that file you'll put, at the very top: namespace facebook; Then you'll also need to put a backslash (\) in three places, where the SDK references the global Exception class. Where you see (also up top) new Exception, ensure those instantiations read new \Exception... Also, where FacebookApiException extends Exception, make that FacebookApiException extends \Exception. Alternatively, put use Exception; up top under the namespace declaration and you should also be ok. So you're really only talking about a few small, tiny, itsy bitsy changes. You can relax.
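To make those edits concrete, the top of the modified file ends up looking something like this. This is a sketch only; the actual SDK code is elided, and only the namespace-related lines are shown:

```php
<?php
// Top of app/libraries/facebook/Facebook.php: give the SDK its own namespace.
namespace facebook;

// Where the SDK throws `new Exception(...)` (in its extension checks up top),
// prefix a backslash -- `throw new \Exception(...)` -- so PHP resolves the
// global class from inside the new namespace. Same idea for the class declaration:
class FacebookApiException extends \Exception
{
    // ...the SDK's class body, unchanged...
}
```

With those tweaks in place, `use \facebook\Facebook;` works from anywhere in your Lithium app, which is the whole point of the exercise.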

I'd put this into a library, right? Why not? So within your app\libraries folder, make a "facebook" folder, and rename the "facebook.php" file you just downloaded to "Facebook.php" because it's just not going to work otherwise. Ok? All good? At this point you have an "app\libraries\facebook\Facebook.php" file with a few tiny tweaks to use namespaces. Cool? Ok, on with step 2.

Now let's setup our authentication in a pretty common fashion. We're talking about applying filters during the framework bootstrap process. Create a "config" folder and add a "bootstrap.php" file, so you have a "app\libraries\facebook\config\bootstrap.php" file. Within here is where you'll be applying some filters to setup the authentication for your site. Now, if you've done this before, it's going to be fairly familiar. I'm going to paste an example that will provide you Facebook Connect authorization only. It's different than what I'm using because I'm allowing people to login using accounts in my local database or with Facebook Connect. I also create a new user record upon logging in with Facebook Connect. That way users can create their local profiles just the same whether they registered with my site or used Facebook Connect bypassing a registration process. Your needs are most likely going to differ. If not, then maybe I can be convinced to wrap up a complete "User & Auth" library. Part of this is going to look exactly like the example they provide along with the FB PHP SDK. So, here we go:

use \lithium\storage\Session;
use \lithium\security\Auth;
use \lithium\action\Dispatcher;
use \lithium\action\Response;
use \facebook\Facebook;
use \facebook\FacebookApiException;

Session::config(array(
     'default' => array('adapter' => 'Php')
));

Dispatcher::applyFilter('run', function($self, $params, $chain) {
     // Create our Application instance (replace this with your appId and secret).
     $facebook = new Facebook(array(
          'appId'  => 'XXXXXXXXXX',
          'secret' => 'XXXXXXXXXXXXXXXXXX',
          'cookie' => true,
     ));

     $session = $facebook->getSession();
     $me = null;
     // Session based API call.
     if ($session) {
          // Write the session
          Session::write('fb_session', $session);
          try {
               $uid = $facebook->getUser();
               $me = $facebook->api('/me');
          } catch (FacebookApiException $e) {
               $me = null;
          }
     }

     // A login or logout url will be needed depending on current user state.
     if ($me) {
          // This will come in handy later
          Session::write('fb_logout_url', $facebook->getLogoutUrl());
          // So set the Auth and pass along (in the session) the data from FB API
          Auth::set('user', $me);
     } else {
          // Again, this will come in handy (unless you're using the JavaScript SDK)
          Session::write('fb_login_url', $facebook->getLoginUrl());
          // If no FB session, clear any local session we may have set
          Session::delete('fb_session');
          Auth::clear('user');
     }

     // Here's one way of locking out different actions based on login status.
     $blacklist = array(
          'users/dashboard' // example URLs to protect; adjust to your routes
     );
     $matches = in_array((string)$params['request']->url, $blacklist);
     if ($matches && !Auth::check('user')) {
          return new Response(array('location' => '/users/login'));
     }
     return $chain->next($self, $params, $chain);
});

That's pretty much all you need (at the least) in your bootstrap process. Step three: we're going to place a Login and Logout button on the site. So we need to go to the view template of your choice (it could be a users controller's "login" action, or it could be elsewhere). Let's say it's the login action. Again, I recommend using the JavaScript SDK as well. I'm also going to assume you'd be using jQuery. Here's what our template might look like:

<?php $html->script('http://connect.facebook.net/en_US/all.js', array('inline' => false)); ?>
<script type="text/javascript">
// initialize the library with the API key
FB.init({
     appId   : 'XXXXXXXXXXXXXXXX',
     session : <?php echo json_encode(\lithium\storage\Session::read('fb_session')); ?>,
     status  : true,
     cookie  : true,
     xfbml   : true
});

// fetch the status on load
FB.getLoginStatus(handleSessionResponse);

$('#login').bind('click', function() {
     FB.login(handleSessionResponse, {perms:'read_stream,publish_stream'});
});

$('#logout').bind('click', function() {
     FB.logout(handleSessionResponse);
});

$('#disconnect').bind('click', function() {
     FB.api({ method: 'Auth.revokeAuthorization' }, function(response) {
          clearDisplay();
     });
});

// no user, clear display
function clearDisplay() {
     $('#user-info').hide('fast');
}

// handle a session response from any of the auth related calls
function handleSessionResponse(response) {
     // if we dont have a session, just hide the user info
     if (!response.session) {
          clearDisplay();
          return;
     }
     // if we have a session, query for the user's profile picture and name
     FB.api({
          method: 'fql.query',
          query: 'SELECT first_name, last_name, pic_square FROM user WHERE uid=' + FB.getSession().uid
     }, function(response) {
          var user = response[0];
          $('#user-info').html('<img src="' + user.pic_square + '">' + user.first_name + ' ' + user.last_name).show('fast');
     });
}
</script>
<div>
     <button id="login">Login</button>
     <button id="logout">Logout</button>
     <button id="disconnect">Disconnect</button>
</div>
<div id="user-info" style="display: none;"></div>

That's just about all you'll need. \lithium\storage\Session::read('fb_session') is going to hold all your session data from Facebook, but you'll probably want to use the JavaScript SDK to actually do anything useful with that information, since it will be limited. Don't forget that Facebook limits the information that you are allowed to store locally within your database. Of course, you could also continue to use the Facebook PHP SDK throughout your site to get this information.

If you don't want to use the JavaScript SDK, you'll have the login URL set in the session data from the filters you applied. You'll grab it from: \lithium\storage\Session::read('fb_login_url');

Of course you can get more complex...You can setup a bunch of ACL based on your needs and make users perform other final registration tasks or you can, like I did, combine this with your User's model and allow people to register and login without Facebook Connect or login with Facebook Connect as an alternative.
