Archive for the ‘Lecture’ Category

Audio / Slides / Code Posted from my Drupal Camp NJ talk

Tuesday, February 26th, 2013

I recently gave a talk at Drupal Camp NJ on Drupal Coding Patterns. You can listen to the entire talk and view the slides and the code here: http://www.drupalcampnj.org/content/drupal-coding-patterns.

Posted in Lecture | No Comments »

Getting Comfortable Working With Rails Core

Sunday, August 15th, 2010

This year’s RailsConf was truly one of the most impressive and inspiring development conferences I have ever been to. Bright-eyed and bewildered, I got to listen to lectures from people big and small in the community. Two individuals, in particular, stood out to me: Yehuda Katz and Rick Martinez. I listened to Yehuda’s keynote, in which the now-Rails-hero mentioned, “I was just some guy who dove in, and here I am.” Rick Martinez of Flavorpill.com, whom you may not know, is really just a guy who did dive in and was able to give a fascinating talk called “Hardcore Extending Rails 3.” The overall message? Again, “I just started digging around, and here’s what I came up with.”

As an aside, one of the questioners at Rick’s talk was Yehuda, who commented something to the effect of “this is what we wish more people would do, just dive in.”

I knew I had what it takes to dive in; it was just a matter of discipline and taking the time to learn the tools and get acclimated with the libraries. I walked into the conference feeling like a novice about to hear four days of lectures that were way over my head, and I left chomping at the bit, waiting to tear the guts of Rails 3 wide open (which I began to do on the entire train ride home).

For the past two months since then, I’ve been consuming more Ruby information than ever, determined to master the language and become a productive member of the Rails community. There have been frustrating times when I didn’t understand anything (and there will be more of that to come), but there has also been a wealth of knowledge that I have gained and have been able to apply immediately. For those of you who want to dive in, I’ve assembled this list of things I’ve done to get going, and I think it could help you out, too. Pick and choose as you wish; these items may not all be for you, but if it’s on this list, it’s because I directly associate it with my growth.

  • Read The Ruby 1.9 Book

    If you want to work with Rails, you have got to learn Ruby. Ruby is filled with concepts and techniques that you probably haven’t experienced in other languages. The more of the language you know, the more you will understand the design patterns that the Rails library uses, and the more power you can leverage in your own code. The internal architecture of Rails is a lot to take in; you don’t want to be hung up on Ruby syntax at the same time. Learning Ruby will help you work out whether something confuses you because of the Ruby syntax or because of the Rails library itself (after learning Ruby better, I found the Rails code much easier to follow). Read “The Ruby 1.9 Book,” also known as “The Pickaxe Book.”

  • Learn Ruby Metaprogramming

    Metaprogramming is the practice of writing code that writes code. Two weeks ago, I would have told you that metaprogramming is something the pros do because they’ve grown so bored with the boundaries of the language that they want to add to it. On the contrary, metaprogramming is a staple of working with Ruby and complements the language’s aesthetics. Learning about concepts such as Open Classes, Dynamic Dispatch, Dynamic Methods, and beyond will help you understand how programmers wrangle Ruby to get it to work the way they want it to (see the short sketch after the resource list below).

    Here are three great resources for learning Ruby metaprogramming:

    1. “The Ruby 1.9 Book” has an entire chapter on metaprogramming. I recommend reading this before jumping into…
    2. “Metaprogramming Ruby” by Paolo Perrotta. This book explains metaprogramming concepts in a fun, observational way, with an entire section on metaprogramming Rails.
    3. As a follow-up, Yehuda wrote an article that helps clear up some common misconceptions about how to properly write libraries – “Metaprogramming In Ruby.”
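
    To make those terms concrete, here is a minimal, self-contained Ruby sketch; the class and method names are mine for illustration, not taken from either book:

    # Open Classes: reopen a core class and add behavior to it.
    class String
      def shout
        upcase + "!"
      end
    end

    # Dynamic Methods: define methods programmatically at class-definition time.
    class Config
      def initialize(values)
        @values = values
      end

      %w[host port user].each do |setting|
        define_method(setting) { @values[setting] }
      end
    end

    # Dynamic Dispatch: decide which method to call at runtime with send.
    config = Config.new("host" => "localhost", "port" => "3000", "user" => "jim")
    puts "ruby".shout        # => RUBY!
    puts config.send(:host)  # => localhost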

    By now you realize you’ve got a lot of reading to do, which is why I totally recommend that you…

  • Buy an iPad

    This may seem like one of those “you-got-to-be-kidding-me” kinds of to-dos. But the truth is, I attribute so much of my educational growth over the past month to one thing: buying the iPad. Last year, I bought the Ruby 1.9 book and the PDF. The book sat on my shelf, and I only casually browsed through it. The PDF was on my laptop, and I only fired it up a couple of times, then got distracted by a client’s email or a blog post, and ultimately didn’t read any of it. With the iPad, I take it everywhere: I read for 15 minutes at the laundromat, 30 minutes at the pool, before I go to sleep, etc. In three weeks, I read over 450 pages of the Ruby 1.9 book and over 250 pages of the metaprogramming book. Here is the key, though: make it an educational device only. For me, that meant not loading a Facebook app or any other distractions; I didn’t even configure my mail account or add music to it. It’s simply there for reading ebooks, RSS feeds, and tweets from other developers, and for writing blog posts. It’s a productivity machine!

  • Read the Rails source code

    While it may seem daunting at first, the current version of the Rails 3 library is one of the most elegant codebases to try and follow. Before taking the plunge into the codebase myself, I used to hear people say “look at the source” and think, “I am not at that level yet.” Trust me, you are, and the more comfortable you get with Ruby, the more you will see how the Rails code plays to its strengths. Additionally, the library is filled with extremely helpful comments. For example, check out base.rb in ActiveRecord; don’t quote me on this, but I’m pretty sure that for every line of code, there are two or three lines of comments.

  • Get Involved

    When it comes to getting involved with Rails, you couldn’t ask for a better community to help you out than the Rails community. There are people with all different levels of expertise, willing to lend a hand to help you achieve whatever you’re trying to accomplish.

    • Go to conferences and meetups. Not only do you learn about emerging technologies, but you also get to network and make friends with other developers, as well as learn about prominent developers in the community. One key piece of advice: don’t be shy. Everyone is there for the same reason, and everyone I’ve gotten to meet in the Ruby community has been nothing but nice and helpful.
    • The Ruby on Rails IRC channel on Freenode, #rubyonrails. Don’t feel shy; just ask your question and be patient. Feel free to hit me up in there; I’d be more than happy to help you out.
    • RailsBridge – RailsBridge is a group of people committed to helping you learn more about Rails. They answer questions in the #railsbridge IRC channel, they organize weekend bug mashes to help the Rails core team get through ticket issues in Lighthouse, and they are really just a great group of people. I got to spend a good half hour at RailsConf chatting with Santiago Pastorino of WyeWorks about RailsBridge and all the things they do – awesome stuff!
    • Read up on the work other developers are doing. Guys like Aaron Patterson, Ilya Grigorik, and Yehuda Katz commonly blog or tweet about things they are working on. You can learn a lot by being a fly on the wall.

    The biggest piece of advice I can give you in terms of getting involved with the Rails codebase is: start. Any new library takes time to learn, and Rails is no exception. If you follow any or all of the tips I’ve listed, I think you’ll find the learning curve to be a little less steep. If you have any other advice that helped you out along the way, please leave a comment below; I and others would appreciate it.

    Posted in Lecture | 4 Comments »

What’s in a deadline, really?

Thursday, February 4th, 2010

As I sit here in court, waiting patiently to fight a cell phone ticket that I am clearly guilty of, I got the opportunity to read over David Hansson’s recent post on deadlines. To recap, he stated that we should stop holding ourselves accountable to unrealistic deadlines, or better put, to deadlines that force us to sacrifice quality, testing, or hours. This isn’t a new thought for any seasoned developer. While we dread the idea of slaving over late hours and patch-and-go design methodologies, we seem to constantly practice the former and, unfortunately, the latter.

While I agree that we shouldn’t be pigeonholing ourselves into tight deadlines, the truth is that, most of the time, doing so will land you the contract, and refusing it will result in the client giving it to one of countless other developers who will gladly slave to meet that impossible deadline. So how does one find an equilibrium where both the client and the developer are mutually happy with the quality of the work and the amount of time it takes to build it?

For most development companies, three basic questions fuel the work we do, to a certain extent:

  • What do you need done?
  • How much can you pay to get it done?
  • When do you need it done by?

Armed with these three answers, you can derive a general synopsis of the life cycle of a project. For a certain type of client, the first two questions are givens: the client needs these features, and they only have a certain amount budgeted to get them done. Clients generally come to you with specific demands that they typically have good reasons for (and obviously, we will intervene if our experience tells us the demands need tailoring). As for budget, that’s always a touchy subject: we are resolute in what our labor costs will be, but sometimes clients won’t budge and you still want to help. As a result, time may be your ultimate leverage once you understand the different ways time can factor into a project.

Why is your deadline?

Steve Walker and I spend a great deal of time debating certain points of our industry in hopes of carving out a strategy we refer to as “Client-Driven Development.” The idea is to shift the paradigm of many of the assumed truths of developing software for clients. One of the facets that interests me is the concept of a deadline. To explain, here are a couple of examples.

The arbitrary deadline

Scenario: The last web company I worked for believed in the arbitrary deadline as a means to engage its employees. A new idea would be conceived by the CEO, and its deadline was always “the last Friday of the month,” regardless of rationale, the time estimated to develop the feature, or what day of the month it currently was.

Result: The net result was a company divided between people who knew the deadline was bullshit, and so lost passion for the project (and eventually the company), and people who worked feverishly to meet these deadlines because “he’s the boss.”

The open-ended deadline

Scenario: Many times, you will work with a client who “doesn’t have a deadline.” The reasons they don’t vary, but many times I feel the client thinks they are being accommodating by being very open-ended.

Result: Offering an open-ended deadline can have two general consequences. The first is that the developer may afford the client the ability to negotiate either the features or the budget. The second is that the developer will hold to their estimate, but the project will never get done. You all know about that project you have right now that started over six months ago, where either you haven’t finished it because there is no client pressure, or the client hasn’t finished providing the content or approvals – also because there is no client pressure.

Note: Having an open deadline is like getting in shape without weighing yourself first: sure, you may tell yourself, “the amount of weight doesn’t matter, just as long as I look good,” and you still might even lose the weight. However, knowing your starting point and agreeing on the end point is the catalyst for realistic motivation.

Event Inspired Deadline

Scenario: A very common situation is for a client to come to you and say they need a project launched by a certain date because of an upcoming event, where the event coincides with a date significant to the client’s demographic. Some examples are trade shows, monthly newsletters, cultural events (e.g., launching a red version of a site by Valentine’s Day), or a launch in tandem with a new product release, to name a few.

Result: As a whole, I find I am more accepting of these deadlines because there is a black-and-white reason for their necessity. The issue developers have with these deadlines is that, oftentimes, they are given without much time to develop: “We have a trade show in 2 days and we want this new section to go live by then,” or, even more painful, “We have a trade show in 2 months and we want this new section to go live by then” – and then you don’t get the content for it until two days before the deadline.

Conclusion

My suggestion to you is not to try to treat the deadline game in a one-size-fits-all manner, because all clients and projects are different. It’s just not realistic to think that every client will agree to the one way you are going to address deadlines. I have found it much more effective to analyze what type of deadline the client has and see how I can use that as an advantage against the other main aspects of the project. Remember that while the three points of a project – features, budget, and time – may always form a triangle, there are different types of triangles based on how you handle each vertex.

Sidenote: For those curious, I got the ticket thrown out.

Joey Naylor: Dad, why is the American government the best government?
Nick Naylor: Because of our endless appeals system.

Posted in Lecture | 1 Comment »

What would you like to see in project management software?

Saturday, December 5th, 2009

As a web developer, whether you are a designer or a programmer, you have to manage tons of information to complete your project as quickly as possible, with all of the client’s requirements in mind. It doesn’t matter if you are a large web shop or a freelancer starting out with a small number of projects: you need to be organized. Organization of data, feedback, tasks, contact information, assets, and any other requirements ensures that you can maintain the big picture of the project, down to every bullet point, and that you have a paper trail of where direction and decisions came from.

However, it seems that even though every size shop requires the same things, we can’t all seem to settle on the same software to organize our projects. The one-man show doesn’t want to pay $24 a month for a tool that deprives him of time tracking and is deeply ingrained in multi-user collaborative features. The large company has problems with its oversights: it doesn’t account for maintenance or non-project work items, it doesn’t organize e-mail tickets, it doesn’t incorporate invoicing, and other random nuances that large shops have. Therefore, large shops are forced to hold a number of subscriptions to varying SaaS apps, with redundant data, making for an uneasy experience for the workers. This is no slight to Basecamp; the extremely popular software is just one of many that offer powerful tools for managing projects.

I have a vision for software that manages the way your web shop works, not just how you organize projects. My ultimate goal would be to develop a turn-key application that allows companies large and small to handle the life cycle of a client and their project. We are at an amazing time in web development, where information accessibility and frameworks are turning hobbyists into professionals. Larger shops are at a huge advantage too: with an infrastructure in place, and the business world acknowledging the need for a web presence like never before, your revenue is only limited by your turnaround times.

That’s why I am calling on you, readers from all walks of life, to tell me about your day-to-day work process as a web developer, and how you envision an organizational application handling what you do. Please leave a comment below, and I promise not only to keep the development of this application customer-driven, but also to give you beta invites (so make sure to include your email address!).

Thanks in advance!

Current Items

  • Client Management
  • Project Management
  • Web Development Specific Roles: Project Manager, Developer, Freelancer
  • Invoicing
  • E-mail Maintenance Ticket Integration
  • Create Billable Services – such as Email, Hosting, SEO, Consulting, etc
  • Wiki / KnowledgeBase (thanks, Mikey Van)
  • API for accessing/submitting information

Posted in Lecture | 5 Comments »

A Response to An Event Apart Chicago 2009

Tuesday, October 20th, 2009

As I sit here at O’Hare Airport, eyes burning from lack of sleep, I’m finally getting to wind down mentally and really absorb what I’ve learned from the 2009 An Event Apart conference in Chicago. As a long-time ALA reader, I was absolutely pumped to come out here for the first time and take in cutting-edge information straight from the horse’s mouth: a collection of industry heavyweights and unsung heroes alike. Put on by Eric Meyer, of noted CSS fame, and Jeff Zeldman, known for his contributions to web standards, the conferences have firmly established themselves as setting the de facto standard for the quality and gusto, that certain wow factor, that our imaginative and creative peers crave when making substantial investments of time and money in our careers.

The conference was split across two days; the speakers’ topics on the first day focused more on the business side of web development, while the second day concentrated on code, development strategies, and concepts meant to open our minds and nudge the industry forward.

If I had to give the conference an overall grade, it would be a B. It’s a grade that leaves you thinking “I wish it was an A, but I’m glad it wasn’t a C”, and that’s pretty much my reaction. While A List Apart is normally known for introducing new and high-concept topics, the conference seemed to amplify sentiments from the past year. To say I was unsatisfied would be a lie, but to say I was blown away would be one too.

Note: all photos beautifully captured by John Morrison – © 2009 John Morrison – subism studios llc – link

The Speakers

Jeffrey Zeldman


The brain behind Happy Cog, Zeldman gave a talk about ensuring we are solving the problem we were asked to solve. He emphasized the importance of research to help fight your case, build trust with your client, and ensure that you are addressing the task at hand. Don’t forget about tried-and-true software design principles, such as scenarios, to show how the site you are developing will address that research. My favorite section of his lecture was about learning to “translate” what the client has been asking for: this is not their native thought process, so we should not assume that what they are saying (or complaining about, more appropriately) is really what they want. The culmination of his lecture was to ensure we are addressing “the user, the research, and the problem.” And when doing some personal site work, don’t forget that you get bored of your layout way before the rest of the world does.

Final Grade: B – Very solid topic, but played a little too safe for me.

Jason Santa Maria


Jason is a guy who leads by example, whether you like his ideas or not. Santa Maria spoke about a point that most will agree with: even though we constantly think about “the big picture,” much of the UX lives within the small details and nuances that give a site its personality. For this reason, his first of three takeaways was to keep a sketchbook – a topic he wrote about back in April (and the main reason I started to keep a sketchbook on me at all times) – so that you are good to go whenever those ideas, big or little, come along. Topic number two was to incorporate a grid system, whether a pre-packaged one like 960, the up-and-coming Blueprint, or one you roll yourself. In addition, he suggests thinking horizontally as well as vertically; we constantly think about columns, but horizontal content chunks help to maintain consistency even if your vertical layout needs to change (see Jason’s site, jasonsantamaria.com). Lastly, point three was his take on how to use fonts, suggesting that you should try to stick with one serif font and one sans-serif font.

Final Grade: C – The sketchpad idea is very important and related well to “thinking small”, but the grid section didn’t seem as relevant, and the font ideas just seemed very opinionated.

Kristina Halvorson


An amazing take on what could have been a topic doused in buzzword fluff. The head of Brain Traffic, she spoke of how to better integrate content into the life cycle of a project, a method BT has used to produce a range of satisfied customers, from state universities to Target. In addition to site maps and page tables, she suggests using a “content inventory” system to audit a pre-existing site so you know exactly what you have out there for visitors to see (a system I had already used with a client only four days after she informed me about it). The net result is a stronger brand, better audience targeting, and more optimized content for search engines.

Final Grade: A – For me, Kristina epitomized the level of new and credible knowledge that loyal ALA’ers are accustomed to.

Dan Brown


Following in perfect suit, Dan Brown, author of the must-have Communicating Design, introduced “concept models” – a new approach to identifying content relations for a more potent information architecture. Similar in spirit to relational database mappings, the talk didn’t just introduce the concept: he walked through procedures for defining and developing the models and the common patterns to look for, and he backed it all up with real-world examples.

Final Grade: A – Just like Halvorson: introduce your concept, explain it, show the details, give solid examples. Very direct idea, which was extremely valuable, and instantly usable.

Whitney Hess


True to her New York roots, Hess spoke about a sort of guerrilla approach to developing a user experience. She walked us through some case studies, including how Iridesco, creators of Harvest, hooked their contact system directly up to a Gmail account and built a back-end feature-request system, to build what the users want and not what the development team thinks they want. Her main takeaway was that you can upgrade your UX through design research, web analytics, usability testing, and experimentation and iteration.

Final Grade: B – Hess not only helped dissect UX into very graspable components, but also gave the common man techniques to implement on their projects now.

Andy Clarke


What isn’t there to say about this upstanding gentleman? After he suggested that he was a secret agent in a past job, I listened to his lecture about “tearing down the walls” by designing in the browser. As a developer who masquerades as a designer, I am all for designing a site in the browser; after all, it is the medium we are addressing – the main selling point of Clarke’s lecture. Designing in the browser allows for more interactive mock-up reviews (rollovers, mouse* events, etc.), faster edits to fonts, and it addresses browser anomaly issues up front, rather than after a client approves a lifeless mock-up. The technique is also extremely powerful for rolling out multiple template mock-ups faster, as exemplified in a case study for the New Internationalist (note: as of 10/19/2009, his design isn’t implemented).

Final Grade: B – I applaud Andy’s efforts at presenting a topic that pushes back against the Photoshop-fluff site mantra that has emerged over recent years, but I find that his technique only works for the category of clients that do not demand a “pixel-perfect” design. While I wish we would openly adopt his design technique, it’s going to be a while until we get there.

Eric Meyer


Meyer’s talk, “JavaScript Will Save Us All,” was pretty much a recap of all the cool things that JavaScript can do for us now that the engines have gotten much more powerful. He went on to talk about how JS is being used to level the playing field so Internet Explorer can hang with the big-boy browsers, with scripts such as IE7-js; how it is the driving technology, in tandem with Flash, for font-substitution schemes such as sIFR; and a handful of other creative uses like interpreters, CSS extenders, and the like.

Final Grade: C – To quote a tweet I posted immediately following his lecture: “Am I the only one who feels like they were expecting more from this topic? Everyone seems kind of pumped …I kinda feel like ‘and…’”. I am a huge fan of Meyer, but a recap of the products of a technology that most professionals are already familiar with and use every day is not what I was expecting from the industry giant.

Aaron Gustafson


Aaron presented eCSStender (pronounced “extender”), a JavaScript utility that lets all browsers implement properties that are not currently supported consistently, and that even lets you create your own brand of toolkit with your own CSS properties. The script implements a hook system in which unsupported properties fall back to a JavaScript function that eCSStender looks up. Written framework-agnostic, Gustafson showed very detailed examples of the features eCSStender has to offer, and how to implement it in a couple of real-world scenarios.

Final Grade: A – While conceptually I’m not on board with the idea, you have to applaud Aaron’s efforts at crafting a product that is not only a solution to browser anomalies, but also extends the capabilities of CSS with JavaScript. A unique idea, definitely in the ALA league.

Simon Willison


Simon spoke about “building stuff fast and getting it approved”: his lecture outlined ways to develop products faster using new, agile-esque techniques. A core creator of Django, the Python web framework, he mentioned using the tools of your given development language to prototype faster: Firebug for CSS/JS, IRB for Ruby, BeanShell for Java, etc. He then spoke about using JSON with padding (JSONP) to quickly build apps on top of other sites’ APIs. Additionally, he talked about YQL, Yahoo!’s content query system; “/dev/fort,” in which he and a group of his peers locked themselves in a fort in England to conceive, design, and develop a site from start to finish; hack days; screen scraping; open source; and GitHub.

Final Grade: C – If you haven’t already gathered, his lecture was incoherent and erratic. Willison definitely seems like a guy with a lot of great insight, but he needs to work on tightening it into a digestible stream of consciousness.

Luke Wroblewski


An in-depth evaluation of the psychology of forms, brought to you by the man who literally wrote the book on the topic. Luke took us through well-known examples of web forms, including PayPal, Amazon, and LinkedIn, to illustrate common pitfalls in form design. He presented findings from his research into how users respond to forms with vertical label alignment (better for ease of use) versus horizontal label alignment (better for updating). He used heat maps to display how visitors navigated through certain forms, gave a detailed analysis of how to use client-side validation, and whimsically took us on a “who’s who” of screwed-up web forms.

Final Grade: A – I may go out on a limb and say that this was my favorite lecture. Definite ALA material; Luke presented forms in a way I’ve never heard before. He also definitely walked away with the quote of the event: “This is the web … fix it.”

Dan Rubin


Rubin’s lecture on “Designing Virtual Realism” was a walk through the world of interface design and how it relates to the web. His big takeaway was that great design explains itself, and that a natural-feeling interface will create the illusion that an application actually performs better.

Final Grade: C – Reminiscent of Simon Willison’s buckshot approach to lectures, it was sort of hard to follow. Halfway through, his speech turned from creating realism to adding textures to your background. When it was all over, I’d be lying if I said I felt like I learned something.

Dan Cederholm


Wrapping up the event was a very hands-on demonstration of the power of CSS3, combined with a tactful approach to progressive enhancement. Through the use of his very elegant demo site, dowebsitesneedtobeexperiencedexactlythesameineverybrowser.com, Dan showed us how we can add little hints of new CSS properties by utilizing eight techniques – one of which is to start using RGBA – even though not all browsers will render these styles the same across the browser spectrum.

Final Grade: B – Dan confidently walked us through a barrage of new CSS techniques, and alluded to a bunch of great techniques that I have no doubt we will start to see emerging as trends in the upcoming years.

Posted in Lecture | 1 Comment »

Use Your Address Bar Like a Command Line

Monday, July 20th, 2009

There have been many debates back and forth within the development community as to how your URLs should look. 99% of the time, you will find that the debate is fueled by whatever Mr. Google says to do. That is just fine for ramping up your SEO results, but what about for your applications?

While I am not about to offer any new discipline or technique that you haven’t already heard before, I may offer a way of approaching a very common URL technique that is new to you. The good folks over at the Symfony Project offer what I feel is a very logical method of creating the routes to your web application.

How many times have you seen a URL like this …

http://www.example.com/user/432

…and how many times have you tried to change that number to look up a different user?

That is the theory behind using your address bar like a command line. There is no harm in guessing what value should be in there. We build URLs intuitively, and they should be used intuitively. Most people from the Linux community will agree: if we don’t have to use our mouse, we won’t. So if I can see how an application is creating its URLs, I will use that to my advantage, and it’s foolish to think that your visitors won’t if they are a little tech-savvy.

Take a look at this quote from a tutorial directly from Symfony:

In a web context, a URL is the unique identifier of a web resource. When you go to a URL, you ask the browser to fetch a resource identified by that URL. So, as the URL is the interface between the website and the user, it must convey some meaningful information about the resource it references. But “traditional” URLs do not really describe the resource, they expose the internal structure of the application. The user does not care that your website is developed with the PHP language or that the job has a certain identifier in the database. Exposing the internal workings of your application is also quite bad as far as security is concerned: What if the user tries to guess the URL for resources he does not have access to? Sure, the developer must secure them the proper way, but you’d better hide sensitive information

Couldn’t agree more.

How To Implement

There is no magic potion here, so this will be more of a resource for people looking to learn by example.

Let’s say your application is for movie theaters, and one piece of functionality is fetching a theater listing by state. A logical approach would be to build your URLs as example.com/theater-list/[state abbreviation]. So if I live close to the border of New Jersey and New York (completely hypothetical, of course), I can look up example.com/theater-list/nj to find all of the theaters in New Jersey. Then, when I decide I’d rather go to the IMAX theater in New York, I can switch it to example.com/theater-list/ny. Sure, there is a link on the page to select New York, but typing is just that much quicker – and that means improved click-throughs.

Step one involves creating an httpd.conf (or .htaccess) directive to route any request for a non-existent file or directory to index.php:

RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

Now when you try to fetch example.com/theater-list/nj, it will still load index.php, but you can parse out the request URI to call an appropriate file and load specific data.

Request URI = theater-list/nj
$Page_Identifier = theater-list (maybe use it in a switch … and let’s say, call theater-list.php)
$Data_Identifier = nj
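
Here is a minimal sketch of that parsing step; the variable names follow the pseudocode above, and the page whitelist is my hypothetical addition rather than part of the original scheme:

<?php
// Every request is rewritten to index.php (see the rules above).
$request_path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
$segments     = explode('/', $request_path);

// e.g. "theater-list/nj" becomes "theater-list" and "nj"
$Page_Identifier = (isset($segments[0]) && $segments[0] !== '') ? $segments[0] : 'home';
$Data_Identifier = isset($segments[1]) ? $segments[1] : null;

// Whitelist the pages we know about rather than trusting the URL blindly.
$pages = array('theater-list' => 'theater-list.php');
$page_identifier = isset($pages[$Page_Identifier]) ? $pages[$Page_Identifier] : 'not-found.php';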

So now inside of index.php

require_once($page_identifier); // the file we identified using "theater-list"

Then, inside of theater-list.php, do something like

$sql = "SELECT * FROM `theater-list` WHERE state = '" . $Data_Identifier . "'"; // of course, you will cleanse this data
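
On that cleansing note, here is a hedged sketch of the same lookup using a PDO prepared statement – the connection settings are placeholders – so a guessed or malicious URL segment can never be injected into the SQL:

<?php
// Hypothetical connection details; substitute your own.
$db = new PDO('mysql:host=localhost;dbname=movies', 'db_user', 'db_pass');

// Bind the untrusted URL segment as a parameter instead of concatenating it.
$stmt = $db->prepare('SELECT * FROM `theater-list` WHERE state = ?');
$stmt->execute(array($Data_Identifier));
$theaters = $stmt->fetchAll(PDO::FETCH_ASSOC);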

Posted in Lecture | No Comments »

Conficker is some new kind of crazy.

Wednesday, May 6th, 2009

I was recently given the opportunity to do some research on Conficker, the new worm that has been causing widespread panic in the Windows user community (poor things).

Introduction
Having already affected around 2 million computers since November of 2008, Conficker is proving to be one of the most threatening strains of computer virus to date. Targeting most major releases of the Microsoft operating system, Conficker has evolved into several different variants attacking Windows 2000, XP, Vista, and Server. To date, it has reportedly infected such institutions as the French Navy, the United Kingdom Ministry of Defence, and the Federal Republic of Germany. This virus has instilled so much fear in the Windows user base that Microsoft and other anti-virus companies are offering a $250,000 reward for information that leads to the arrest of the entity responsible for creating Conficker.

The word Conficker is said to have two possible origins: it is either a mash-up of the word “configure” and “ficker,” the German word for “fucker,” or it is derived from the domain name trafficconverter.biz, a domain that the virus attempts to call and download additional binaries from. The virus, more specifically a worm, exploits a Microsoft security hole originally discovered by Chinese hackers, now addressed by Microsoft as MS08-067, in which an HTTP server is booted and allows remote procedure calls to run without authentication; reports of infections by Conficker started as of September 2008. The virus uses this RPC exploit to download a shell script, which downloads a DLL file that runs as a network service and downloads malicious binaries. While the hole was patched by Microsoft by October 2008, the majority of Microsoft operating systems remain vulnerable due to users electing to opt out of security updates, whether out of ignorance or, in the case of users with pirated operating systems, out of fear.

Conficker is classified as a “botnet” virus, one that allows its creator to take control of the infected computer, usually to use it for malicious activity such as deploying spam e-mail. There are two main pillars that make Conficker so lethal. The first is the level of intelligence that was put into a number of the stages of the virus’ life cycle: it utilizes a very critical exploit of the operating system; it embeds itself deep within root-level access areas; it has a powerful algorithm not only for generating a list of Internet domains to locate updates released by Conficker’s developers, but also for its public/private key handshake authentication of newly downloaded binaries; and later generations of the virus utilized different exploit techniques and added to the virus’ scope of destruction. The second is that the Conficker virus “evolves,” moving into different classifications based on noted characteristics of its behavior and tactics. What is even more startling is the analytical infrastructure that was built into the virus: the aforementioned domains have an API in place that allows an infected host computer to notify the domain not about itself, but about how many other computers it has infected.

Types
When people speak of the Conficker virus, there are several different strains that they may be referring to. Each strain is defined by the utilization of a different exploit, a new self-defense tactic, or its end result.

Conficker.A:
Conficker.A was first officially noted around November 21, 2008, and is the original incarnation of the virus. This was the most basic breed of Conficker, which utilized the MS08-067 exploit to download binaries from trafficconverter.biz. It also installs a shell script that generates a listing of 250 top-level domains, which it periodically polls to check whether the virus’ creators have any updates to the software.

Conficker.B:
The next evolution was Conficker.B, and it appeared around December 29, 2008. To date, this is considered the most threatening of all variants due to its proactive as well as retroactive additions. This variant added two notable differences from its predecessor. First, it attempted to gain access to computers on the network via the Microsoft NetBIOS API, using dictionary attacks to gain access to the administration shares. Secondly, and possibly more importantly, this strain of Conficker implemented a scheme to attach itself to removable media, such as a USB drive, which would then install the virus on any computer the drive connected to via the Windows AutoRun utility.

The virus also added preventative measures, such as attempts to block Microsoft’s automatic updates, hoping to discourage any attempt by Microsoft to squelch the exploits.

Conficker.C:
The Conficker.C strain was the first to implement the regular top-level domain calling. In the earlier half of March, it was noted that a small group of computers infected with Conficker.B actually started receiving binary updates from some of these domains. These updates upgraded the virus to Conficker.C. This version increased the domain lookup list to up to 50,000 domains, and reverse-engineering attempts hinted at a possible coordinated use of all gathered Conficker botnets and the queried domains. In April, Conficker.C began a wave of botnet attacks in which it lured victims to fake anti-virus sites, getting users to spend $50 on anti-virus software that is actually the Conficker virus itself, while also storing all of their credit card information. The big scare of Conficker.C is the evidence of a peer-to-peer network that the entire Conficker campaign developed and had begun to employ.

Conficker.D:
While noted as an individual type, the D strain is not acknowledged with the same merit as the others. It’s noted as a major deviation of Conficker.C, as it implemented a prevention mechanism to disallow DNS lookups to known anti-malware websites.

Conficker.E:
Version E is the current version of Conficker, and the one the world is watching in anticipation. It also has deep connections with the C variant and its ties to fake anti-spyware sites. More impressive is its ability to recognize the B variant and, after confirmation, update the virus to variant C over the aforementioned peer-to-peer network. The world is currently waiting in anticipation of Wednesday, May 5, when, reverse engineers at major anti-virus corporations believe, Conficker.E is set to self-destruct.

Symptoms & damage
The Conficker virus is not very difficult to detect on a computer. In the first stages of infection, the virus implants multiple virus-based binary files in different areas of the computer. The third stage of the virus shuts down many protection functions of the operating system and any anti-virus software. It begins by disabling the Windows update system, which includes the MS08-067 update that protects against the specific exploit the virus takes advantage of. Additionally, it disables access to the websites that known anti-virus companies use for their software updates.

Perhaps one of the most dangerous aspects of the Conficker virus is its ability to be easily modified and updated remotely. As previously mentioned, there are four main mutations of the Conficker virus, each of which is steadily more harmful and dangerous. Any system with a less-than-current version of the virus can be updated through a P2P-like protocol programmed into the virus itself. By this method, a Conficker version A or B can be updated to Conficker C or E. Conficker C and E will download malicious executables and phony anti-spyware software onto your computer, which prompts you that you have a virus and asks for your credit card to buy anti-virus software. This supposed software is actually the virus, and your credit card information is stored.

The process the virus follows is rather ingenious, fooling both the operating system’s in-place security and any anti-virus software that might be installed. The Conficker virus begins by checking the host computer for a firewall. If the infected computer has a firewall, the virus opens a random logical port by sending a Universal Plug-and-Play (UPnP) call to the firewall. Through this open port it is able to download the rest of the virus as a binary stream. The virus has a public key that allows it to receive encrypted files from a self-updating random list of hosts that might possibly host the download for a malicious executable.

Prevention and protection
The best first step to protecting your computer is to update Windows with Microsoft’s latest operating system updates. The Conficker virus utilizes a well-known exploit in Microsoft Windows, which Microsoft has acknowledged, labeled MS08-067, and addressed with a patch released in October 2008 that directly fixes the exploit. The exploit itself allows malicious code to be installed and executed remotely by sending it through the RPC (remote procedure call) server. This server is intended to run programs remotely, but it is supposed to be a secure protocol in which you must provide validation to prove you aren’t an attacker. Installing the patch reworks the methods used for handling RPCs, effectively putting up a solid first line of defense against the virus breaking into your system. In addition to installing the latest Microsoft system updates, keeping anti-virus software installed and fully updated is important to prevent and/or treat the Conficker virus. While these methods seem rather basic, the Conficker virus relies on the people who, for whatever reason, haven’t updated their systems completely. If you have a fully updated operating system and anti-virus software, you will almost certainly detect and/or prevent a Conficker infection.

The virus propagates through the Internet and through local area networks (LANs). Even if a computer has all of the necessary updates, however, the virus’ creators implemented a clever workaround for the case where some computers on a LAN have blocked the exploit. Conficker can implement a firewall facade, which makes the end user believe they are protected. The fake firewall, however, will detect other strains of the virus that are trying to update the infected computer’s current version, and will allow them to pass updates through the pseudo-firewall.

The virus can also spread through removable media such as flash memory drives, floppy disks, or CDs you burn from an infected computer. When any of these devices is put into a non-infected computer, the Windows AutoRun function automatically runs the .DLL file that the virus places on the removable device. In order to prevent the Conficker worm from gaining system access via removable devices, the Windows AutoRun function should be turned off (a sketch of one way to do this follows).
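
One common way to turn AutoRun off, offered here as a sketch rather than official remediation guidance, is the NoDriveTypeAutoRun policy value; 0xFF disables AutoRun for every drive class:

Windows Registry Editor Version 5.00

; Disable AutoRun for all drive types; import with regedit, then reboot.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff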

The virus also implements a social-engineering attack: it adds an option that leads the user to believe they are opening a folder to view files, and it even has the same icon as the normal Windows option of the same name. When selected, however, this nefarious option executes and copies the virus to the hard drive.

http://isc.sans.org/diary.html?storyid=5695

Built into the virus is also a list of commonly used network passwords. Negligence on the part of an administrator or user in password creation allows the bug to easily bypass password prompts by entering passwords from its long list of extremely negligent, common passwords. Known as a “dictionary attack,” the passwords included in this list are things like strings of the same character, consecutive letters or numbers, and very common phrases such as “PASSWORD.” Setting any password to something this simple and easy to guess is a problem in general, but the Conficker virus uses simple passwords to its advantage. By creating a password that is a random string of characters, numbers, and letters, you effectively stop the Conficker virus from accessing your computer over a network and deny it the use of your administrator password to infect your files.

If the virus is already infecting your system, prevention is no longer an option, and the virus is much more difficult to remove than before infection. The Conficker virus attaches binary files to many system .dll files, including patching svchost.dll, an important Windows runtime library. Additionally, it prevents you from accessing the online databases required to update any software that would be able to stall the virus. While there does not seem to be a good way to repair all of the affected files, Microsoft advertises that its web-based anti-virus scan and removal tool is able to isolate and repair affected systems, which allows the owner of the system to re-access the Microsoft update site and update Windows with the necessary patches. According to the press releases by Microsoft about the virus and the patch update, after installing and running the anti-virus tool and installing the update, Conficker is essentially neutralized and no longer poses a threat.

Conclusion
While the world sits and waits in anticipation of what future variations of the virus will do, it is also worried about May 5th, when the E strain of the virus is set to expire. The same expectation arose in early April, which only led to more strains. One thing is certain, however: Windows users are more aware of the virus, they are more educated, and they are taking more proactive measures to not only treat but also prevent this virus. While the end of the virus’ life cycle is still unclear, it is plain to see that Conficker has made an impact as prevalent as the Melissa virus and the I Love You virus, and it will continue to be the main focus of anti-virus corporations and software engineers until the end of its reign.

SOURCES

http://en.wikipedia.org/wiki/Conficker

http://mtc.sri.com/Conficker/

https://forums2.symantec.com/t5/Malicious-Code/Connecting-The-Dots-Downadup-Conficker-Variants/ba-p/393517

http://groups.google.com/group/alt.comp.virus/browse_thread/thread/1ebc07beac72bda4

http://www.microsoft.com/protect/computer/viruses/worms/conficker.mspx

http://www.sophos.com/blogs/gc/g/2009/01/16/passwords-conficker-worm/

http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/the_downadup_codex_ed1.pdf

Thanks to Michael Skeffington for his contributions to the report.

Posted in Lecture | No Comments »

Symfony – The Switch We All Saw Coming

Thursday, April 23rd, 2009

Well, you’d all be lying if you said you didn’t see it coming. I gnawed your ear off in my last post about Doctrine, how necessary object-oriented design is in web application development, and how I hadn’t found that “perfect framework” for me yet. Well, Symfony is definitely on its way to being my new framework of choice.

The reasons are obvious:

  • MVC Architecture
  • Doctrine Powered
  • Intuitive and Powerful
  • Awesome Documentation and Community Support



Now, I’d be lying if I said that there wasn’t a learning curve. For those of you already cool with MVC systems, your only real task is learning how Symfony divides projects into applications and applications into modules; after that, it’s just a matter of picking up helpers and useful configurations, and you’re off. For those who are new to MVC architectures, I would strongly suggest reading up on it first. This is not so much necessary for Symfony as it is a good idea in general.

I remember hearing about Rails and how awesome it is because it’s very powerful and utilizes cutting-edge, sophisticated patterns and systems. While this is wonderful, if you aren’t keen on these things (MVC, Active Record, database abstraction layers, and for PHPers… Ruby), then you are going to get hit with a lot and probably wind up saying “Rails is too confusing,” like I did. But then, with about a good year’s worth of studying in computer science and software engineering (if only cool Frank could hear me now), I learned about such technologies independently, became comfortable with the concepts, and even implemented them in my own projects.

So fast-forward to today, when I feel that I have a pretty good grasp of these concepts and ideologies, and I discover a framework that pulls together all of those elements I feel strongly about. It’s definitely a no-brainer for me to go the Symfony route. This, however, is not to say that everyone who is into the aforementioned details should run out and use Symfony. A framework strongly relies on the user; you need to click, just like you would with your spouse (hopefully, that’s a good kind of “click”). A similar thing happened with me and a JavaScript framework. I hated writing straight JavaScript a long while back, and I was desperately seeking a JavaScript framework to help me enjoy it so I could make those rich web interfaces I kept seeing. So I grabbed Prototype, and I did everything in my power to learn it. But it just wasn’t “clicking.” I stopped all my Prototype efforts and, all in all, left JavaScript alone for a while. Then I found jQuery, and now I swear by it: I’ve contributed to it, it made me love straight JavaScript, and it actually got me into writing more JavaScript without a library.

That is the whole point of a framework, after all: to speed up your development. Symfony is doing that for me in a big way, thanks in part to my past years of software engineering research. I say it all the time: “Programming is organic, regurgitating isn’t.” I found a great framework that works for me, but who knows, maybe it will work for you too.

I’d like to thank gnat42 from #doctrine. You were right as always … and you’re allowed to speak now.

Posted in Lecture | No Comments »

Doctrine – Build DB Models faster with less headaches.

Monday, April 13th, 2009

If you aren’t already hip to the idea of Active Record, it is the concept of tightly coupling records in a database to objects in whatever language you are working with. This allows you to conceptualize and produce your application in a very similar manner. For example, when you want to make a blog, you think about a Blog, Comments, Categories, and Tags (to name a few). Object-oriented design teaches us to think of these as objects, requiring you to give them attributes and methods, while UML takes it a step further and defines a system for engineering the software with OOD – linking the objects together to form cohesive functionalities.

Since beginning my master’s in computer science two years ago, I’ve been on a constant quest for a framework that best suits my needs: fast production, systematic, object-oriented, and scalable. Some front-runners may scream “Rails!”, which I tried – but I lost interest, mainly because I spent more time trying to play by DHH’s rules than actually developing. (Let me be very clear: I am not knocking Rails or DHH. The framework is brilliant, and the community and support are booming, but it just wasn’t for me … maybe in the future.) So I searched PHP frameworks: CakePHP, Zend, and Symfony were all overkill for me, a lot of bloat that I didn’t need (once again, not “unnecessary,” just “I didn’t need” … the aforementioned PHP frameworks are also awesome and I applaud them all). It wasn’t long until I realized that I was only going to be happy if I made my own, and so I did.

Now, any developer worth his salt has an arsenal of code that he reuses; that’s a given. I’m no different: I’ve developed a library specifically suited to my needs, consisting mainly of classes, common and layout functions, and random configurations. One key feature that they all rely on is my MySQL and DB class. Together, they act as a sort of abstraction layer to a database. For example, if I did

 $user->create($data)

it would loop through the data, clean it as needed, create an INSERT statement, and return the results. It worked okay, but it wasn’t crazy flexible. Joins were nearly impossible, so I had to flank my own system by writing straight SQL statements. That was okay, though, because my queries were normally very straightforward due to the domains of the projects I worked on. That was, however, until further study of software engineering opened my eyes to an ugly issue I found in my code, and in many others’.

The issue is as follows: OOD says an object is to be fine-tuned so that it can handle itself the best that it can; it doesn’t know anything about any other object unless you tell it to. If you need more access, you need more objects. Relational databases, on the other hand, want you to fine-tune your queries so that one statement can grab all the info you need. So, to recap: OOD wants multiple accesses to objects of limited information, and RDBs want a single access to an object of unlimited information. Two necessary components of your app with two differing ideologies.

With a little thought, it becomes apparent that while multiple hits to the DB are really bad, breaking OOD patterns is just wrong. So while I was researching how to develop an object-relational mapper to keep your code OO while keeping efficient DB access in mind, the good folks in #php kept referring me to Doctrine.

Doctrine is an ORM built on top of a database abstraction layer. After extremely thorough investigation, it proves to be an invaluable resource for any OO developer trying to build the most efficient application possible. Doctrine comes complete with a powerful active record class system with an elegant, fully documented API, a database schema manager, a suite of command-line utilities for various tasks such as auto-generating models and database tables, a built-in fixture system for auto-populating test data, integration for the development of custom test suites, and yes … much more.

Now, while these features alone are enough to sell even the most skeptical buyer, what sold me was its extremely small footprint and seamless integration with my custom framework. Truth be told, Doctrine took about 20 minutes of reading over the guides to get comfortable with, about 10 minutes to integrate with my existing framework, and it now saves hours of code writing. Utilizing PHP5’s object daisy-chaining, you can link Doctrine’s lightweight record methods to do in one line what previously took close to 100.

For example:

$user = Doctrine::getTable("user")->findByEmail("test@email.com")->toArray();
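
And where my old system made joins nearly impossible, a related-object query is just a slightly longer chain. This is a hedged sketch against the Doctrine 1.x query API, with the User/Phonenumber models borrowed from Doctrine’s own examples rather than from my framework:

<?php
// Fetch users and their phone numbers in a single query (one DB hit).
$users = Doctrine_Query::create()
    ->from('User u')
    ->leftJoin('u.Phonenumbers p')
    ->where('u.email = ?', 'test@email.com')
    ->execute();

foreach ($users as $user) {
    echo $user->username . "\n";  // columns hydrate as object properties
    foreach ($user->Phonenumbers as $phone) {
        echo '  ' . $phone->phonenumber . "\n";
    }
}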

It doesn’t get much easier than that. Doctrine is backed not only by a very intuitive API and well-versed documentation, but also by #doctrine, one of the most helpful and courteous IRC channels I’ve ever worked with. With the development of version 2 in the works, Doctrine is on its way from being a three-year-old ORM to becoming an absolute necessity for any web application powered by PHP.

Posted in Lecture | No Comments »