Saturday, April 18, 2009

Interns and How To Use Them (or Not)

My last couple of posts have had to do with your shop's body of expertise, especially as expressed in terms of a learning organization and R&D. Although I see interns as a natural extension of such efforts, I realize the concept might take a little explaining for many.

Internship programs are a little like salads; even the bad ones have something good in them. However, it is sad how many businesses miss out on the great asset a properly structured internship program can be.

The mistake an overwhelming number of businesses make is to assume that internship programs are mere PR exercises, imposed by HR on every department at management's whim, and that they translate into nothing more than a lot of extra coffee being brewed.

As a result, no effort goes into interviewing or recruiting the right kind of interns. Interns are merely tolerated into boredom, especially in IT — where we all get our own coffee, thank you — and nobody in particular benefits from the program.

Such a routine repeats itself all too often, wasting valuable talent and a research tool. Ironically, even under such undesirable circumstances the company may gain some goodwill in the community, and some potential new talent might get discovered. But so much more can be gained.

Here are some suggestions on how IT can ensure it gets the right kind of interns and utilizes them in the best possible way. These should make interns welcome additions to your IT crew.

(1) Be Selective
This is very important. You really want to make sure you get the kind of interns that will be great members of your team, even if temporarily so.

If possible, interview the prospects your department will be taking on. If this is not possible, have them write a short essay on what they expect to get or learn from an internship. The idea is you want to make sure there is some connection between what you are planning and what the interns expect to be doing.

I should be careful to point out that your best interns may not necessarily be Computer Science majors. Sometimes majors in other disciplines have the right aptitudes and attitude to be good IT interns.


(2) Plan Far Ahead
Interns are best suited to long-shot tasks that would be too time consuming for regular staff. Be careful, though, to pick tasks that lend themselves to easy ramp-up. Also, make sure you have clear goals and proper metrics in place. This not only boosts intern morale but helps in determining how successful the program is.

Do not plan on improvising. Have a long list of items at the ready. Nothing encourages commitment and seriousness like a well-planned to-do list.


(3) Be Flexible (and realistic)
While many managers are accustomed to getting eight or more hours out of their regular employees, such expectations are unrealistic when it comes to interns. If an intern's agreement specifies eight hours, expect no more than seven and a half. Some interns may have signed up for half a day or even less.

Be strict about quitting time. Many interns leave to go to jobs or classes and may not be bold enough to insist on being released on time. Getting penalized at their next appointment may dampen their enthusiasm.

Whether they work for a small stipend or for free, let them know they are appreciated. A small expense that goes a long way with interns (who are mostly broke college students) is food.

Keep the kitchen stocked with snacks and drinks and give them a free lunch without fail — with many interns, this might be their only real meal. You would be surprised by how much can be accomplished with such a small and relatively inexpensive gesture.


(4) Sandbox and then Sandbox Some More
I remember a certain large corporation, which shall go unnamed, that was brought to its knees for almost two days after an intern accidentally wiped out one of its key databases. This should easily lead us to rule number 1: never expose interns to production systems, clients, or sensitive data.

Remember that, unlike your employees, who have signed legally binding NDAs and are covered by your firm's professional liability insurance and other legal mechanisms, interns are merely expected to behave themselves within reason. So, sandbox them.

Interns should never have access to customer data. They should never work on any mission-critical system or application. They should never be allowed to sign off on, or access, any code after it has been signed off on. They should not participate in any release or communicate directly with clients on behalf of the company. Your legal department will be happy to fill you in on the reasons; I will just summarize by saying that you will regret it very much if you don't take these precautions.


So what can interns safely and reliably do? Well, here is a list of the types of projects that are not only sandbox-safe but challenging and fun enough to motivate both interns and engineers:
Proofs of Concept

This type of project is among the favorites of programming enthusiasts and is probably one of the best ways to get quality R&D work done practically for free.

Code Libraries

This is a safe way for interns to contribute to the shop's products, and it gives them something to take pride in.

QA and Testing

With the proper precautions and done right, testing can be fun for your interns and extremely profitable for your shop. Some of the less demanding types of testing, such as UAT, are ideal for interns whose technical skills are limited.

Requirements Gathering

This is another task that many interns will enjoy and that will leave the company with a useful outcome.

Documentation

Creating or editing help files and other software documentation — a task that is always left until "later" — is very suitable for interns.



(5) Plan For the Future
As you get ready to sign the evaluation sheets for your interns, plan for the next year.

Mark the interns you would like to have back the following summer. Likewise, take note of the ones you would rather not have return. But most importantly, use the experience to compile a list of the qualities you appreciate in an intern; this will help you in recruiting your next batch.

As with every asset at your disposal, an internship program can be another way to gain an edge on your competition. In the spirit of carpe diem, I encourage you to seize the program this year and next.

Wednesday, March 25, 2009

Why You Need R&D And How To Get It For Less

If your development shop is to improve in technology and quality while outpacing the competition, you need some amount of research and development (R&D). This is especially crucial in shops seeking to expand their intellectual property portfolio.

However, the word on the street is that R&D is only for very large corporations such as IBM or Microsoft with billions to spare. I used to feel that way too, but I was pleasantly surprised to learn from the best in the business that there was a cheaper alternative.

It turns out that Google, as in so many other areas, has blazed that trail. Not by inventing the concept, but by proving that it could be done. The idea of giving developers a percentage of their bench time to pursue whatever idea they fancy had been floating around for a good long while but had been flatly rejected by others.

I recall a certain project manager who would say "we all want to have fun, but we got bills to pay." This appears to be the rationale in most development shops, especially if there are budgetary constraints. However, I would like to propose a way to get around this hurdle fairly painlessly.

The first step is to improve in the time management department. It is surprising how poorly time is managed in many development shops. For example, although there are long days at crunch time, the stretches between patches and releases are often filled with pointless time killing or meetings of dubious utility. Methodologies such as Agile have resolved this imbalance somewhat, but we need to go even further.

Developer bench time can be optimized painlessly if done gradually. I would suggest beginning with a 5% allocation for personal projects. This is the equivalent of two one-hour meetings in a 40-hour week — barely felt, but significant enough to establish the practice and explain the principle.
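As a sanity check on the figures above, the 5% allocation works out as follows (a trivial calculation, shown only to make the ramp-up concrete):

```python
# A trivial check of the 5% figure; the 40-hour week is illustrative.
week_hours = 40
allocation = 0.05

personal_hours = week_hours * allocation  # time carved out for personal projects
print(personal_hours)  # 2.0, i.e. the cost of two one-hour meetings
```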

The key is to make sure this is not seen by developers as a break or social time. In fact, I would suggest two rules to make sure what needs to happen, happens during this special time:
(1) No web surfing except for brief visits to coding sites
(2) Heads-down coding throughout — no pressure on the quality or the amount, though.

Do not expect much to happen the first couple of weeks except for some habituation. However, if this schedule is kept religiously and developers are asked to speak briefly about their "personal projects" during the warm-up for regularly scheduled meetings, expect to start seeing some incredible code being generated by week number three.

The beauty of this practice is that you will start seeing improvement in the quality of the production code and even an increase in collaboration among the developers.

As you start reaping the benefits of this program and your team's productivity grows —by virtue of their becoming better developers and by virtue of some of their personal project code finding its way into released software— it will be easier to sell to management the idea of expanding personal project time to a healthy 20%.

But, be patient enough to do so gradually. It is okay if it takes two years to go from 5% to 20%. The practice will yield fruits as long as it is done properly.

So there you have it: a very simple way in which your shop can benefit from R&D without breaking the bank. You are welcome.

Tuesday, March 24, 2009

The Learning Organization And Why You Need One

Many people in management believe a learning organization is one of those things that are "nice to have" and will be gotten around to if there is time or money available.

Worse yet, many think the phrase represents just another useless fad and want nothing to do with such a concept.

I will assume the worst and try to sell you on the idea before telling you how to get there.

If the sum total of your company's expertise is no more than that of a single employee — even if that employee is the genius founder — then your organization doesn't have much of a future. The world becomes increasingly complex as time advances and your firm needs to keep up.

So, what are good examples of learning organizations? Google, HP, Apple, Walmart, and Microsoft are just a few of the companies that in one way or another may be characterized as learning organizations. Their balance sheets and market positioning attest to the value of the knowledge-based shop. But enough of the preliminaries. Let's begin by defining a learning organization.

The beginning and end of a learning organization is the realization that knowledge is a valuable asset, as worthy of safeguarding as any other capital good. In fact, it is fair to say that knowledge is probably the most important asset — and I am not just talking about intellectual property or patents, which are more legal constructs.

Having established that the valuation of organizational knowledge is the mark of a learning organization and the secret of its success, let us examine the ways you can make your organization a learning one. After much thought I realized that five action words pretty much sum up the attitudes of a knowledge shop:
1) Document
2) Train
3) Audit
4) Preserve
5) Transition

Document
I have to be careful when I say "document" because many companies bury their employees in red tape and procedural straitjackets in the name of "documentation."

No, no, no! In fact, hell, no!

Documentation does not mean filling out forms in triplicate or writing thick hundred-page reports full of useless details that nobody will ever read. That is actually a productivity killer.

The documentation that works is the kind that fits easily into the workflow and is not onerous. In fact, the best way to get your developers documenting their work is to give them a say in how best to document it.

An easy exercise is to have developers perform a specific task and document it the best way they can think of. Then have them pass their notes to another developer, who must replicate the task using only the written instructions supplied by the first. Finally, ask the second developer to suggest to the first how the original set of instructions could be improved.

The best thing about this exercise is that after discussing the results you can put together a documentation guide that everyone will be motivated to use because they are sold on its usefulness. After that, it will be a trivial matter to decide what software to use if none has been chosen yet.

Train
In a true learning organization, all members should either train or be trained at least once every week. This should be written into the contract or the SOP.

Of course, this requirement should be flexible in every aspect. All forms of interactive knowledge transfer should be accommodated: group sessions, learning-lunch pair-ups, one-on-one cubicle tutorials, conference calls — any which way, as long as it allows for note taking and questions.

Although blogs, wikis, instant messages and even tweets are good, they should not be allowed to take the place of actual interactive training.

Also, some amount of learning verification or testing should be worked into the process. This need not be too formal or extensive, but there needs to be a way to gauge what forms of training work best and which don't.

Audit
This is probably one of the most important and yet complex aspects of keeping an organization ahead in the age of information. In the NFL (National Football League) they call it "studying the game film."

Every release, major patch, platform change or post-deployment emergency should be audited internally. It is not only important to have a post-mortem when things go bad; there should be one when they go well too.

Retrospective analysis needs to become part of the workflow. This not only ensures improved performance and better-quality products; it also helps to solidify winning practices and avoid repeat failures. Think of this as insurance.

Retrospectives will not hold back the team if they are properly planned. Also, if one of the goals of the audit is to make the next release less painful, and the promise is delivered on, the team as a whole will gladly rally behind such reviews.


Preserve
If you have a winning horse, you insure it and feed it the best Purina has to offer. You need to preserve proven practices and technologies that work for your team. If these are proprietary and have obvious commercial value, your legal department will probably make sure you do.

However, anything that can help your team and others in the future or around the organization should be preserved and/or disseminated. One way to do this is through a developer intranet knowledge base or wiki.

Care must be taken, however, to provide a proper way of cataloguing or searching this information. Also, it might be a good idea to separate or differentiate preliminary notes from actual finalized data.

Another very important way to preserve knowledge is through enterprise code libraries and APIs. However, no library or API should be allowed to be catalogued without proper documentation and source code — this is extremely important, especially in these litigious times.

Transition
A learning organization is dynamic and forward looking. This means that it is important to be always planning for the next stage or technology. Although I don't suggest that every new technological fad be followed, I don't think it is healthy for a shop to keep old code around for so long that it becomes too difficult or impractical to support — remember all that old code that needed updating when the year 2000 rolled around?

Be cutting edge, but not bleeding edge. Whenever possible, use the latest stable version of any technology you choose and plan for transitions far in advance. Most importantly, go at a pace that suits your organizational needs, not your vendor's sales schedule.

Speaking of transition, technology moves are easier when design objectives are formulated in terms of functionality needed as opposed to technology to be used.

For example, instead of a requirement that says "the SAX API should be used for this task," your stated objective should read something like "a serial-access XML parser that is efficient with large data streams or documents should be used." If you do mention a specific implementation, do so parenthetically (as an example).
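To make the distinction concrete, here is a minimal sketch of what a requirement like "a serial-access XML parser that is efficient with large data streams" permits in practice. It uses Python's standard-library iterparse, which is just one implementation satisfying the objective; a SAX handler would be another.

```python
# Streaming (serial-access) XML parsing: the document is consumed element
# by element, never loaded whole into memory. iterparse is one stdlib
# implementation meeting the stated objective; a SAX handler is another.
import io
import xml.etree.ElementTree as ET

xml_stream = io.BytesIO(b"<orders><order id='1'/><order id='2'/></orders>")

ids = []
for event, elem in ET.iterparse(xml_stream, events=("end",)):
    if elem.tag == "order":
        ids.append(elem.get("id"))
        elem.clear()  # discard processed elements -- the point of serial access

print(ids)  # ['1', '2']
```

Because the requirement names the needed functionality rather than the API, swapping the parser later is a local change instead of a redesign.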

A learning organization stays ahead because it looks ahead without forgetting the lessons learned along the way. It harnesses (documents) and multiplies (trains) its experiential knowledge. It gets better with time because it maintains a proper feedback loop (audits). It performs optimally because it knows its strengths. It keeps around and sharpens (preserves) its best tools in order to reuse them for even bigger kills. And, lastly, it knows when to change horses (transitions).

Sunday, March 22, 2009

IT Spending In Hard Times

Many firms are seeing hard times as certain sectors of the economy have been shrinking over the past few months. This is, of course, not a desirable condition. However, I would like to posit that a lot of good could come from the belt-tightening measures many an IT department will have to go through.

Actually, I should probably begin by stating that there is good belt-tightening and there is bad. Companies that engage in the latter may find themselves in an even worse crisis or, worse yet, may even go out of business.

As IT people, we usually have less control over how much of the company's budget is allocated to our department than over how we allocate the funds we do end up with.

Let's examine different key items on an IT budget and see how these can be kept up to par, if not improved during lean times.

Hardware
It is often assumed that if hardware is not the absolute latest, (1) performance will suffer and (2) employee morale will go down. While we should not fall so far behind as to isolate the company technologically, hardware purchases should be justifiable in relation to the job requirements of the employee in question.

For example, studies have shown that developers and designers improve their performance when using dual monitors, so such an allocation for these types of workers is justified. However, executives and administrative assistants have no use for the extra spending, so IT can save by simply letting them have a single, reasonably sized monitor.

By the same token, there might be other items, such as Blackberries and corporate cell phone accounts, that many developers really have no need for and just represent extra spending — exceptions must be made of course for product engineers who might need to be on call.

Software
This is indeed a great time to look into free tools and open source software. I am not necessarily advocating a wholesale conversion, just that areas of potential savings be identified.

Another very important area to address is duplication. Many companies, for example, will spend money on site licenses for software they already have in another form. A good example I have noticed at companies that have used my services is compression software. I find it ridiculous that a company would pay to deploy an operating system with built-in compression support and still pay for a third-party compression utility.
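As a toy illustration of the duplication point, the sketch below compresses and reads back a file using only standard-library zip support, the kind of built-in capability a paid third-party utility would merely duplicate. The file name and contents are invented.

```python
# Zip support already ships with the standard library (as it does with
# modern operating systems), so a separate compression utility is often
# pure duplication. The file name and contents here are invented.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("report.txt", "quarterly numbers " * 100)

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    data = zf.read("report.txt")

print(names)      # ['report.txt']
print(len(data))  # 1800 -- the original bytes recovered intact
```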

Projects
Most companies are good at killing projects that are deemed unprofitable or useless. However, many companies support parallel efforts at great expense.

Sometimes, it is just a matter of ignorance; at other times, it has to do with inter-departmental rivalry. Either way, it is wasteful for there to be duplicated development efforts within an enterprise. Any effort to curtail this kind of thing is worth the expense.

I would suggest a corporate version of SourceForge where all enterprise applications are catalogued and managers can search by keyword or description for applications that they need.
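A minimal sketch of such a catalogue might look like the following. The application names, descriptions, and fields are all hypothetical; a real system would sit on a database and index the descriptions properly.

```python
# A toy internal application catalogue with keyword search. Entries and
# field names are hypothetical.
catalog = [
    {"name": "InvoiceBot", "description": "generates and emails customer invoices"},
    {"name": "LogMiner", "description": "searches server logs for error patterns"},
    {"name": "ShipTrack", "description": "tracks shipments and emails status updates"},
]

def search(keyword):
    """Return the names of applications mentioning the keyword."""
    kw = keyword.lower()
    return [app["name"] for app in catalog
            if kw in app["name"].lower() or kw in app["description"].lower()]

print(search("emails"))  # ['InvoiceBot', 'ShipTrack']
print(search("logs"))    # ['LogMiner']
```

Even something this simple lets a manager discover an existing application before commissioning a duplicate.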

Division of Labor
There is a lot that can be said about this; I will try to be brief. Many companies respond to crises by either laying off developers and admins or by getting rid of contractors. While there are good arguments for either move in individual cases, as fixed policies both are horrendous ideas.

Let's first examine the basics. No matter how good times ever were, the number of full-time developers or network administrators should have been kept to a minimum. There are two reasons for this: (1) we do not want people who have so little to do they get bored and (2) software engineers are the kind of employee you want to have for a very long time as their value to the company increases exponentially as their seniority accrues.

Contractors should be hired to fill temporary gaps in personnel or to supplement the core team on large projects. The decision to hire contractors should always be based on potential savings and should be in no way connected to whether there are IT dollars to waste.

In other words, if you had to either cut contracts or lay off developers due to the recent economic downturn, you were probably doing something wrong all along.

Amenities
These tend to vary a lot and it's hard to tell a firm that they should remain constant. However, I believe that, if the amenities in question were properly analyzed or justified initially, they would be less likely to be subject to natural variations in the IT budget.

Another thing to be careful of is mistaking necessities for amenities. Office supplies, lights, proper seating, and desk space are not amenities; they are job requirements — ask OSHA.

Coffee machines, food concessions, free snacks, cable TV in the break room, gym memberships, etc, are actual amenities.

There is no easy way to dispense with freebies employees have grown accustomed to. This is why I always recommend that companies offer just a few choice amenities which they are likely to be able to maintain regardless of the vagaries of the economy.

Another tack would be to have employees vote on which amenities they wish to keep and which they are willing to part with. This would help to keep morale high.

Efficiency
These tough economic times are the best encouragement any IT department can have to streamline its operations. If something can be achieved in fewer steps, reduce the number of steps.

Minimize requirements, simplify procedures, consolidate functions, look into virtualization if you haven't before, do not underestimate hardware recycling or reuse, try not to reinvent the wheel every time, cut costs wisely and where it is least felt or less likely to be seen.

This current crisis is not the first one we have faced and is not likely to be the last. It can be an opportunity to shape our organizations for a magnificent resurgence.

Tuesday, March 17, 2009

Why We Fight

This phrase has become very popular over the past few years in connection with the Iraq war. I, however, would like to turn the question on developers: why do we code? What is our job?

Before you dismiss the question as simplistic, or irrelevant, or both, think about it. What is your deliverable at the end of your project?


If you want to build a ship, don't drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea.

—Antoine de Saint-Exupéry



Lack of clarity in this respect has ruined many a software engineering job. Of course, I am not trying to say that any outfit can afford to dispense with specialization or division of labor. In a large project, no one person can execute every aspect of the job at every stage of the development cycle.

But does this mean that we can afford to disengage? Not at all. On the contrary, we should be all the more plugged into the process and should have the goal clear in our minds.

To truly contribute to a project, developers need to be flexible and open-minded. This might mean enduring temporary discomfort to achieve the greater joy of a solid product — and, no, I am not talking about pulling all-nighters, although I do not condemn the practice per se.

The engagement I recommend is a commitment to the final product that transcends pet peeves and preferences — a willingness to learn, adapt, go around obstacles, and eschew pettiness. Given the large egos we IT people have, I know this is not easy; but we will have to manage it if we are to remain relevant.

Sunday, March 15, 2009

The Beach On Which Many Come To Die

Reading through Slashdot messages recently I was reminded of the many vulnerabilities web-facing applications are prone to and how little thought is given to their security. After waging fierce battle on the server side, many applications make it to the shores of the Internet only to be shot down by hackers on speedboats.

This happens because poor practices coupled with many misconceptions translate into an incredibly porous security wall safeguarding applications.

The dangers of the web are many and are constantly evolving, so the secret in securing applications is to stick to principles rather than techniques or tricks that become obsolete in short order.

Principle 1: Don't Give Away The Store
I am not advocating security through obscurity here — that would be very foolish. But failing to suppress debugging error messages at the very least makes your site more tempting to a hacker — and may even make your application more vulnerable by airing too many details of your server configuration.

Principle 2: Don't Trust Any Client To Validate or Scrub Data For You
I am a strong believer in client-side validation, but I am extremely aware that clients are easily manipulated. Got that? EASILY MANIPULATED! This goes not just for form data; it also applies to cookies, AJAX requests, and hidden fields [including the viewstate, for you .NET types].

Some years ago, I recall reading an interview with a famous hacker (I think it was Kevin Mitnick) who said that his favorite hacking tool was a web browser. That was several years ago. Browsers have become more sophisticated and powerful since.

In the trenches, I have had the opportunity to see first hand how much damage a skillful hacker can do using a browser. All this is to say that the only reason to validate on the client side should be to offload server cycles and improve the user experience. There must also be validation on the server side, and it needs to be more rigorous.
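A sketch of what rigorous server-side re-validation might look like, with invented field names and deliberately strict rules; the point is that these checks run regardless of anything the browser claims to have validated:

```python
# Server-side re-validation of a signup form. Field names and rules are
# invented; the server trusts nothing the client sends.
import re

def validate_signup(form):
    """Return a list of validation errors for a submitted form dict."""
    errors = []
    email = form.get("email", "")
    age = form.get("age", "")

    # Strict re-checks, applied no matter what client-side JS already did.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email")
    if not age.isdigit() or not 13 <= int(age) <= 120:
        errors.append("invalid age")
    return errors

print(validate_signup({"email": "bob@example.com", "age": "30"}))  # []
print(validate_signup({"email": "bob@", "age": "-5"}))  # both fields rejected
```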

Principle 3: Remain Vigilant
Check your logs and raw data from time to time. Look for anomalous inputs or malware signatures. The hacker that failed to break in today is likely to try again tomorrow.
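A toy version of such a log check might look like this; the signatures and log lines are illustrative only, not a complete rule set:

```python
# A toy scan of web server logs for common attack signatures. The
# patterns and log lines are illustrative, not a complete rule set.
import re

SIGNATURES = [
    r"(?i)union[\s+]+select",  # SQL injection probing
    r"(?i)<script",            # reflected XSS attempts
    r"\.\./\.\./",             # directory traversal
]

def suspicious_lines(log_lines):
    """Return the log lines matching any known attack signature."""
    return [line for line in log_lines
            if any(re.search(sig, line) for sig in SIGNATURES)]

log = [
    "GET /search?q=widgets 200",
    "GET /search?q=1'+UNION+SELECT+password+FROM+users 500",
    "GET /static/../../etc/passwd 404",
]
for line in suspicious_lines(log):
    print(line)  # flags the UNION SELECT probe and the traversal attempt
```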

Subscribe to security bulletins. Learn about the vulnerabilities and attacks and keep an eye out for trouble. Nothing beats an early warning system.

Principle 4: Think Ahead
Look for vulnerabilities in your own applications. It is a million times better for you to find an exploit than for a hacker to do so. Poke your code; be creative with QA scripts. Try to get white-hat hackers to take a crack at it. This will pay off in the short and long run.

Principle 5: Stay Up To Date
As much as practicable, keep your libraries and compilers up to date. Don't overdo it so much as to be running beta builds, either. I like to say be "cutting edge," not "bleeding edge."

Use the latest stable versions of your platforms of choice, but be careful not to be faddish. Faddish shops also get hacked because they venture into using untested products whose vulnerabilities may not even be known to the manufacturers yet.


In Sum
These five principles, applied to every stage of software development and subsequent maintenance, will allow you to offer your users a secure application with outstanding uptime.

IT Myths

The IT world, like many other environs, is full of urban legends. We sometimes call them FUD and sometimes we call them "commissioned studies." Some are older than others, but every now and then they come around to haunt us in one form or another.

Linux is Free
This myth is spread by many well-intentioned free software advocates unaware that such a blatant lie only harms Linux's chances of adoption. As with most myths, there is some truth to this one: Linux is free for individuals who wish to experiment with it, but not for enterprises.

Even if a company decides to download free binaries from the Internet, switching to Linux will have its costs. The question to ask is how much less it will cost than choosing the alternative. Linux is a fix-and-forget kind of OS whose cost of adoption is biggest upfront — with reduced maintenance over the life of the deployment. This sometimes leads many to conclude that its adoption is costlier, when it might in fact be significantly cheaper than a competing deployment whose upfront costs are nominally lower.
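The cost argument can be made concrete with a toy comparison; all figures below are invented purely to show how an upfront-heavy option can still win over the life of a deployment:

```python
# All figures are invented, purely to illustrate the upfront-vs-maintenance
# trade-off described above.
def total_cost(upfront, yearly_maintenance, years):
    """Total cost of ownership over the life of a deployment."""
    return upfront + yearly_maintenance * years

upfront_heavy = total_cost(upfront=50_000, yearly_maintenance=5_000, years=5)
cheap_to_start = total_cost(upfront=20_000, yearly_maintenance=15_000, years=5)

print(upfront_heavy)   # 75000: pricier on day one...
print(cheap_to_start)  # 95000: ...but costlier over five years
```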

Windows OSs Have The Highest ROI and/or Lowest TCO
This one has been expressed by a number of "consulting" firms in their "commissioned studies" and is the result of creative number crunching. I don't necessarily mean that Windows OS costs are the worst in the business, just that they are not the shoo-ins many suggest.

For one, it is patently dishonest to suggest that all Microsoft-to-Microsoft transitions are friction-free and require little or no retraining — wake up and smell the Vista, man! This is also an issue between Office 2003 and Office 2007 — fortunately there is third-party software to help with that one.

Another bit of dishonesty occurs when "consultants" fail to work into the Windows OS TCO/ROI the labor costs of things like Patch Tuesday as well as the high cost of antivirus software.

Macs Are Expensive
Again, the only way one can make such an assertion is to factor in only the sticker price. However, such an assertion is the equivalent of suggesting to the police department that it could save on costs by dropping the training requirement for recruits.

Macs may have a higher sticker price, but they have lower maintenance costs than PCs and a longer useful life. This needs to be factored into any serious study.

Now That We Have Java/C#, Who Needs C/C++?
Back in the 80s, there were those using this same kind of logic to predict the demise of the different flavors of Assembler. Now the tunnel visionaries have set their sights on virtual machine code. Please, guys, stop embarrassing yourselves. Another similar claim is the almost yearly prediction that UML will replace all computer languages — please!

Relational Databases Are On Their Way Out
I think I've been hearing this one for the past 10 years. The telling part is that those who say such things have usually never had to set up a data store. Relational databases not only reflect the way we think about things; they also give us some of the best ways to scale massive amounts of information.

Open Source Software Is Not Fit For The Enterprise
This one is usually spread by some well-known and rather ridiculous actors. The interesting part is that one of the most important professions in the world, Civil Engineering, follows pretty much an open source model. Does anyone want to say that Civil Engineering is not fit for the enterprise?

The Bottom Line
Information Technology is supposed to be based on science. Heeding rumors leads many companies to burn large amounts of money on useless pursuits. In the business world the most successful companies make solid IT investments and the losers fumble in this area. Success means not only thinking outside the box, but listening outside the echo chamber as well.

Wednesday, March 4, 2009

Errata and More Musings On Open Source

In my last posting I pitted open source developers against "commercial software developers." Then, someone pointed out to me that many open source projects are commercial and their developers are no less competitive than the cube dwellers at Adobe or Microsoft. So, I apologize for using such imprecise language.

Back to coding. I don't think I emphasized enough in my last post how much eating of one's own dog food goes on in the open source world.

A shining example is the Apache Foundation. Their product is rock solid because they pound it probably more than their most demanding users — I guess their deal with IBM didn't hurt either, but they were good long before that.

Some proprietary software companies have tentatively gotten some of their developers to go into the trenches and observe their users in a natural setting and/or become users themselves.

Not only do I think this is a nice "year on the farm" experience with incalculable returns in social currency, but I also think it really makes developers better at what they do. Could it be that civil engineers are so good because they have to drive on their work to get to their offices?

Imagine how much insight you would gain into the quality of your POS software if you were to spend a week operating the checkout machine at a local retail outlet. Such an experience is so richly textured and layered it almost defies description, except to say that it benefits all the parties involved.

Sadly, I have seen many cases where developers are not only not required to use the products they help develop but are all but forbidden to do so.

I am afraid we often hurt our business through too much specialization. Not-my-department-ism is making us myopic and leading us down a path where we barely have even a theoretical grasp of what our software does.

In my work I have run into applications with very obvious bugs that only emerge during real-world use but are elusive to QA scripts.

Speaking of QA, we need to understand that Quality Assurance is an inexact science. As software becomes more complex and usage scenarios become more varied, the predictive nature of QA scripts becomes all the more lacking in reliability.

Real world testing, eating our own dog food, living off the land, whatever we call it is a necessary part of quality work. We need to use our products as users if we are to develop truly best-of-breed software.

Monday, March 2, 2009

Why You Should Watch Open Source Even If You Don't Care For It

Although most of the software I have written is what could be characterized as proprietary, I have followed many open source projects with interest and have come to admire not just the quality of the code but the process whereby it comes into existence.

Developers whose disdain for open source prevents them from coming into contact with it cheat themselves. This is hardly new knowledge; however, my deductions and lessons learned from open source — in as much as they reflect my unique circumstances and proclivities — may prove useful to you, so I invite you to read on.

Refactoring
People who write commercial software, even the staunchest advocates of refactoring, do not give refactoring its due. Even in the most refactoring-friendly shops, it is done as a one-off process and never acknowledged as benefiting more than the project at hand.

In the open source world, however, refactoring is given more attention because, to paraphrase the famous saying, refactoring anywhere is a boost to good code everywhere. My favorite example is how the Linux kernel code has been systematically rewritten even when, as a product, it has consistently outperformed its nearest competitor for a very long time.

So, why do open source developers refactor more? DaVinci-like artistic pride in their work (which is always open for the world to examine) is a prima facie motive.

However, I think there is also a realization among these developers that by improving on earlier work, they improve on their skills as developers and hence their chances of writing better code in the future.

This, I think, is a philosophical stance that any development team would profit from. Granted, there are release cycles and deadlines to satisfy, but developers can still plan for refactoring. This might be achieved by regularizing schedules and avoiding, in as much as possible, the extremes that make developers go from consecutive all-nighters around release dates to WoW-ridden time-killing days.

I know overtime and comp time are sometimes inevitable, but they should be the exception to the rule. A good project manager or head developer should plan for professional development during bench time with as much care as he/she would budget project-earmarked time. Under such circumstances, enhanced refactoring can be worked in quite easily. Google's 20%-time rule is perhaps the most creative way to achieve this.

Code Review
In the commercial software world, the very phrase "code review" has almost the same connotation as "administrative leave," except that the latter is less tedious. This is very unfortunate, especially since it hurts quality and promotes more bugs than any other practice I can imagine.

Agile shops have helped to remove some of the stigma associated with the phrase, but it is the open source world that has led the way. Open source developers welcome code review the way painters prepare to display their work in a swanky gallery. Why is this?


After monitoring discussion threads for a couple of open source projects, I have come to realize that this confidence might stem from the following factors I've observed:


  1. bad code and its correction are viewed as a learning experience for all, as opposed to an embarrassment to the originator

  2. there is no shame in needing correction; a developer whose code was corrected is still allowed, even encouraged, to correct the code of others when necessary

  3. there is a pervasive sense of ownership in, and responsibility for, "the application" as opposed to one's piece alone

  4. open source developers use their continuous code-review process as a means to collaborate, not recriminate

QA and Testing

Sometimes, inevitably so, despite the best efforts to separate testers from developers, there is pressure on the QA team to certify software too quickly. Understandably, this pressure to ship or release by a certain date is due to the fact that delays cost money.

However, a better practice would be to prioritize QA reports and provide graded certifications so that a product can still ship on time, with an assurance given to customers that a reasonable number of updates will be included with the original price of the software product. Besides, updates and downloadable patches are quite inexpensive to provide over the Internet.

One reason many open source products have far fewer bugs than their commercial counterparts is that, in the open source world, bugs are sought feverishly and expected as part of the process. In the commercial world, however, I am afraid bugs are seen as nuisances and departures from "the original plan."

Open source transparency might seem self-defeating on the surface, but consider the commercial alternative: imagine how devastating it would be if a disgruntled ex-employee were to leak the truth about a product's hidden bugs to the press!

Before anyone mentions Windows Vista, let me hasten to say that I do not condone the release of incomplete products to paying customers. Buyers have a right to expect a certain finish and polish to commercial software products, but they might be more likely to accept certain limitations with the understanding that there are a few minor fixes pending — remember the "we owe" sheet you had to sign at closing when you bought your last home?

Division of Labor
Project managers for both commercial and open source software projects subscribe to the principle of labor division. However, open source projects seem to benefit more from its application. Why is this?

It appears that open source projects are a lot more flexible in how they are structured. In other words, people are willing to drop previously assigned action items and take on new ones as circumstances dictate. Egos seem to be less hypersensitive and players are more flexible.

The reason commercial shops are so inflexible has less to do with IT than with HR. It is a shameful display of incompetence on the part of IT managers that they have not realized that HR titles don't always fit an individual's true capabilities or the specific needs of a project. I would be more forgiving of IT managers if there weren't other fields in which the same issue is handled much better.

In the movie industry, for example, the director may play parts in a scene which, in turn, might be directed by another person. The finished movie is more important than nominal titles and no one feels cheated by such practical reassignments. Likewise, in the software industry, we need to be able to reorganize a team as many times and in as many ways as we need to create a truly best-of-breed product — titles be damned!


Bottom Line
Regardless of whether you see open source as the enemy or as a complement to commercial software, you stand to gain from learning the secrets to the success of projects that have bested competing teams with a lot more resources. These are true code gladiators whose work bears some examining.

I encourage all developers to look at some of the code out there. Every SourceForge project has a link to the code; and the home pages of the projects listed may have links to their respective discussion boards. Failing that, a Google search or the Wikipedia entry for the application in question will do the trick.

Code on!

Wednesday, February 25, 2009

Geothermal Heating/Cooling

Geothermal heating/cooling systems use the earth as a heat exchanger and thereby provide better climate control at a reduced cost.

It is clear that such systems cost less to maintain and appear to be less prone to failure than traditional climate control. However, the cost of acquisition is steep, and these systems do not work in all geographic regions. Still, these systems are worth looking at, and it would be well worth it for builders to familiarize themselves with them.

I am, however, uncomfortable with a trend that has become increasingly prevalent among proponents of "green technologies." They tend to factor government subsidies or tax credits into the economics of their justification. This approach, while tempting, is a little like bribing your son to take his sister's best friend to the prom. The bribe invalidates all other value propositions and makes suspect the subject of promotion, be it a plain-looking prom date or a "green technology."

No matter the cost, any worthwhile technology will have first adopters. If it proves itself, its use spreads and its cost goes down; this is how the market has always worked. Remember how PCs spread despite there being no subsidy or tax break for them for most of their history? Did Bluetooth or USB require subsidies to spread?

The Year of Living Digitally

2009 being the year of the digital transition, I think this might be a good time to pick a few bones with companies (mostly media firms) whose corporate culture has not kept up with the times.

HBO

HBO's issue with the 16x9 aspect ratio bothers me to no end. It's hard to imagine that America's premium cable network sends out a signal designed for people who bought their television sets before the turn of the millennium. What kind of preview monitors do these people use?

Anyone who tries to watch the HBO HD feed is in for a very amateurish viewing experience. Not only does HBO not know how to transition from programs or spots with a 4x3 aspect ratio, they seem unable to produce HD spots for HD shows.

As a result, instead of being enticed to watch whatever show they are trying to promote, I am infuriated by the ridiculous letter-boxed, pillar-boxed, jagged-edged, sub-par image displayed in the middle of a huge black background. It's a crying shame that HBO's promotional department seems unable to produce video of a quality that a teenage geek could match in his or her parents' basement.

Maybe HBO should send their producers to intern at ESPN, which has done an excellent job of transitioning from the 4x3 to the 16x9 aspect ratio and outputs a consistent, high-quality HD signal.


History Channel, CNN, Fox News, and others

These other networks deserve calling out simply because they give HD and/or the 16x9 aspect ratio a bad name. The last time I checked, Fox News appears to have an issue with providing any HD or 16x9 aspect ratio programming.

CNN and the History Channel, on the other hand, think that it is okay to stretch their 4x3 images into fake 16x9 HD video. Let me begin by saying there is nothing wrong with pillar-boxing provided it is done right. In fact, these services could use the extra screen real estate for crawlers and even ads. Whatever they do, they should stop presenting distorted video — this used to be a sign of a broken TV set.
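The arithmetic behind doing pillar-boxing right is trivial, which makes the stretching all the more inexcusable. Here is a small sketch (the function name and interface are mine, purely for illustration) of fitting 4x3 content onto a 16x9 panel without distortion:

```python
def pillarbox(screen_w, screen_h, content_ar=4/3):
    """Fit content of a given aspect ratio onto a wider screen by
    scaling it to full height and centering it, preserving its
    proportions. Returns (scaled content width, width of each bar)."""
    content_w = round(screen_h * content_ar)
    bar = (screen_w - content_w) // 2
    return content_w, bar

# A 4:3 picture on a 1920x1080 panel: 1440 px of picture,
# leaving 240 px on each side for bars (or crawlers and ads).
print(pillarbox(1920, 1080))  # (1440, 240)
```

Those 240-pixel side bars are exactly the real estate these networks could be selling instead of distorting the picture.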

All these cable outlets need to realize that a significant chunk of their audience is watching their signal on newly bought HD screens with a 16x9 aspect ratio and that nobody wants to watch a crappy signal after forking out large sums of money for the experience.

I doubt I am alone in my practice of reserving my 60-inch HD screen for only material that justifies the viewing experience. As a result, my viewership of these networks has decreased not in small part due to how jarring their images appear on a high quality screen.

Monday, February 2, 2009

Why The Digital Transition Must Not Be Delayed

The latest figure I have for TV stations in the US is 1773 — not including low power outfits. Even assuming an average transmitter output of 10,000 watts, the impact of such consumption is considerable: nearly 18 megawatts. This is by no means a trivial power requirement.

Now imagine if we doubled the amount — which is what we have to do to account for the fact that all commercial stations are currently broadcasting both analog and digital signals. As costly as this is, commercial television broadcasters have had plenty of time to incorporate into their budgets the dual power requirement of parallel broadcast transmissions. However, an unplanned extension of the deadline is likely to bankrupt smaller TV operators — the kind most likely to hire locally — and cause further unemployment in smaller markets. The other victims of such a delay would be the tower and transmitter service companies that were looking forward to drumming up some business around transition time.
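For the record, the back-of-the-envelope arithmetic behind these figures, using the station count cited above and my assumed 10 kW average per transmitter:

```python
stations = 1773     # full-power US TV stations cited above
avg_tx_kw = 10      # assumed average transmitter output, in kilowatts

single_feed_mw = stations * avg_tx_kw / 1000   # one signal per station
dual_feed_mw = 2 * single_feed_mw              # analog + digital running in parallel

print(round(single_feed_mw, 1), round(dual_feed_mw, 1))  # 17.7 35.5
```

So the transition-period load is on the order of 35 megawatts, and every month of delay keeps that second half on the grid.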

So, in order to accommodate a lackadaisical and markedly small segment of the market, we are willing to keep weighing down the power grid and further jeopardize an embattled industry? Come on, Congress, that wouldn't be smart at all.

It is true that there appear to be some 6.5 million homes unable to receive digital television. However, we do not know with any certainty whether this deficiency is caused by poverty, disinterest or procrastination. Although the government has run out of coupons, many people have allowed their coupons to expire. I think these people would be a lot more motivated if they could not get any signal on their TV sets and found out from their friends what they need to do. As for the converter boxes, I believe their prices will drop making the coupons unnecessary.

I remember reading on an online forum that in Europe, where there was no coupon program, comparable boxes sell for almost half the $40 US sticker price. Besides, if the government is so concerned about these households, it could allocate some money for low-interest loans to people wishing to buy the cheaper digital sets and/or converter boxes. Considering that the alternative is so costly to TV broadcasters, I think even they would welcome an arrangement that involved them pitching in to a fund to upgrade analog homes in their respective markets.

Look at cell phones. Look how we have migrated from analog to digital cellphone service without coupon programs or anything like them. To quote President Obama, "yes we can."

Superbowl Snafus

NBC regaled the nation with a few glitches in the broadcast of the Superbowl. After so many years out of the rotation, they were a little rusty putting on the show.

Comcast's dubious technical credentials were once again highlighted by 30 seconds of full frontal nudity it served its non-digital subscribers in parts of Arizona. This reminds me of two of my gripes with the cable giant:


  1. Non-digital package subscribers are treated as second-class citizens.

  2. There is no attention paid to customer service -- even the DMV beats them in this area.



My question to Comcast is, whatever happened to quality control? It is true that their digital service is slightly better than their analog feed, but not by much. I remember seeing snow in my HD feeds. Plus, the hum bars in their inDemand offerings render many of them unwatchable on a big screen.

Along the lines of QoS (Quality of Service), considering that most subscribers use the analog feed, some for their primary receivers and some for secondary units, wouldn't it make sense for Comcast to persuade them to upgrade through quality offerings?

This is a lesson satellite providers like DirectTV and Dish have learned. The reason their subscribers happily upgrade from a $29 basic package to an $80 enhanced lineup, or even a $400 sports pass, is that users have become habituated to associating the service with high-quality images and sound.

When I dropped cable for DirectTV, my biggest surprise was how much more local television I was watching. The reason was that even local channels were more watchable. DirectTV manages to give me a better signal from my local stations than locally-based Comcast -- imagine that!

And then there is service: Comcast continues to offer limited telephone support with some of the least cooperative operators. I hope this latest x-rated snafu causes more people to abandon their service and forces the company to shape up. My main complaint with Comcast is that, being the largest cable operator, they give all cable providers a bad name. The truth is that there are some pretty good cable companies out there; I've had positive experiences with the likes of Cox and Adelphia.
Although I still have some bones to pick with the industry effort known as CableLabs, I believe the problem with sub-par players like Comcast is their corporate culture and lack of commitment to service. Shame on you, Comcast!

Wednesday, January 7, 2009

Watchers Vs Readers

Over 500 years after its invention, movable type printing is about to be reinvented. The big problem is that the clarity of vision that surrounded Gutenberg's work appears at times to be diluted to the point of endangering the endeavor.

When the printing press was invented back in the 1440s, the objective was clear and simple: make it possible to print multiple copies of printed material inexpensively and relatively fast. Done and done!

Since then, all we have had to do is redefine inexpensive and fast: paper has gotten cheaper and printing technologies have gotten more and more sophisticated.

The technology that has come to be known as ePaper should, at least initially, have the simple objective of replacing its predecessor. An electronic book that is indistinguishable from a conventional book in ease of reading, portability and energy requirements should be objective number one. Unfortunately, many of the companies involved in developing the technology appear to be getting sidetracked. I think I know why.

As comedian Bill Cosby used to say, you are going to appreciate this work I am doing for you. I think objective number one in every effort is to know one's primary audience. I know, I know, sometimes a secondary audience overtakes the primary one; however, the overtaking always occurs in the context of targeting the primary audience anyway.

Some time ago I had an argument with a friend who could not fathom my enthusiasm about recent developments in electronic paper technology (slowly getting better and cheaper). He felt that it was a useless effort and could not see himself buying any device that used such a technology. Now, this friend of mine is no Luddite; he is an avid technology buyer and is usually excited about the latest offerings to come out of CES or Silicon Valley.

So why was he indifferent -- to the point of derision -- to the prospect of eight-ounce books that could store a whole bookshelf's worth of literature? Because he belongs to the group that I have come to call "The Watchers". The other group I call "The Readers". Ebook or ePaper technologies are not for "The Watchers"; they are for "The Readers." Why?

Watchers are people who, for one reason or another, never learned the joy of reading. They read very well, but they do not relish the practice; they see it as work or a poor substitute for whatever the reading material describes. These are people who rarely if ever read during a vacation. They are the ones usually watching movies on their iPods on the train or while waiting for it, and will prefer a bad movie adaptation over the drudgery of actually reading the book. Watchers complain about the number of pages in a book and see graduation from college as the end of the need to read anything cover-to-cover.

Readers, on the other hand, use their iPods mostly for listening to music while reading a book, magazine or newspaper on the train. Readers buy and collect books. Readers judge movie adaptations by how faithfully they adhere to the spirit of the book. Readers use vacation time as an opportunity to catch up on their reading. Readers are the people you want to target with any electronic paper or book technology because they will value it if it is as user-friendly as what it is trying to replace.

I don't think I need to mention at this point that I consider myself a reader. I look for an eBook to be as easy on my eyes and my hands as a book. Many so-called eBook readers are just stripped-down tablet PCs. Book-lovers do not really want that. Readers want something you can cuddle up with (as you would with your favorite book) but capable of holding hundreds or thousands of books.

The ideal eBook will either be powered by light (i.e. solar) or will have an unobtrusive battery that lasts for many years. As a member of the reader class, I want my ePaper to get right everything that actual paper has gotten right. I also want ePaper to be comparably priced. We would not expect an eReader to be priced like a hardcover book, but neither do we want it to be so major a purchase that it would be unattractive to the average buyer. For example, it is ridiculous to compare the price of an eBook reader with that of a hard-bound encyclopedia because no family ever bought a separate set for each child and most single people traditionally rely on libraries or the Internet.

I think a good price point for eBook readers is that of cell phones -- currently starting at $50 in the absence of a promotion. This initial price should include at least 4 titles chosen by the buyer, with subsequent titles sold for less than their paperback editions ($5 and up sounds fair to me). We are still several months (or years) away from this, but I think it is important to state the kind of objective to shoot for.

Once that first goal is achieved, then other so-called enhancements can be added. This is what makes the development of the iPod so admirable. The iPod's first goal was to replace the portable CD Player. Once this was achieved, then came video and games, wi-fi and even an attached cell phone.

In the same way, a good development path for ePaper or eBook readers should be:
First, the triumvirate of look and feel, cost, and long battery life -- with search (in lieu of an index) and bookmarking (in lieu of dog-earing) thrown in for good measure.
Second, wi-fi and/or blue tooth would be a great first addition, especially for the sake of periodicals.
Third, color and high resolution photography would really sell the new technology to art lovers and the books-with-pictures crowd.
Fourth, animation and maybe sound should be the last things on the list and should be implemented sparingly. In other words, this should not be another movie player; people who want portable movie players (a.k.a. Watchers) already have plenty of choices out there. Ebook/ePaper makers need to take care of their base first.


I know, I know, I have not mentioned existing products that already feature some of the characteristics I've listed. This has been on purpose.

The two most popular products out there, the Sony eBook reader (and its clones) and Amazon's Kindle are, in my opinion, modified tablet PCs or PDAs -- choose your epithet. I think it's good that they exist, but I see them as transitory devices that appeal to gadget lovers. For ePaper or eBook readers to be successful they must appeal to actual book lovers. These devices are to the new technology what the Pony Express was to the telephone -- appetite whetters.

The reality is that, even an IT person like myself (and even more so a regular civilian) would want an eBook reader that behaves more like a traditional book or magazine than like another laptop.

Plastic Logic is working on a product that is starting to look like the future of books. For one, it looks more like a legal pad than a tablet PC and promises a more paper-like experience. It is still not flexible or foldable (two nice-to-have features), but I would have no trouble tucking it into the side pocket of my bag or briefcase next to the latest issue of The Economist (or, rather, in its place).

So, my call to all current and future eBook/ePaper industry players is that they cater to their base. Make your products appeal to people who read, not gamers or movie fans. There are enough of us to give you the push you need. There will always be a place for printed matter; make sure you put down your stakes in just such a place.

Tuesday, January 6, 2009

A Different Tune

A couple months ago, I slammed Comcast for their dishonest ads. I don't know whether one of their attorneys read my post, but the last Comcast ad I heard on the radio used the word "web" in connection with their suggestion of "unlimited access."

While I still don't recommend Comcast for anything (their digital channels are as bad as the technology will allow, and their Internet access is the worst around), I must commend them for at least adding that CYA to their ads. Now, if after hearing one of their recent radio commercials you still insist on using Comcast for your Internet access, you have only yourself to blame if your VPN or FTP or BitTorrent connections don't work as expected.