Thursday, December 25, 2008

Connecting to Server WhatsItsName

This being the holiday season, with most of you trying your hardest not to think of work — unless you are on call, in which case nothing I write here can top whatever the pager on your hip says — I will deal with a relatively light but still relevant topic: server names.

In the early days of the Internet — say, the late 80s and early 90s — when most servers were located on university campuses, they were usually named after cartoon characters such as Snoopy.

Larger businesses preferred geographic names and used them in conjunction with departmental naming. Thus a server that was primarily used by accounting in Denver would be named something like denver_accounting; if they got a second server it would be named denver_accounting2 and so on.

These schemes appeared to work for a while, but increases in the number of physical servers, combined with the surge in virtualization — not to mention the rise of mixed-use servers and cross-department applications — have all but exhausted current naming schemes.

The best proof of this is the bizarre names popping up in current IT server farms. For example, some administrators just use the default name suggested by the OS at install time, which results in meaningless, hard-to-remember names such as GHXF13M.

There are many articles out there that give all kinds of reasons why a server's name should reflect its purpose. However, given the rate of change in modern organizations, this would mean renaming servers fairly often, which would add to the network administrator's list of tasks unnecessarily. The only time such a naming convention makes sense is when the server in question is virtual and has a specialized use. Mixed-use servers are better off having memorable names that fit within a scalable scheme.

Another thing to remember is that, given the litigiousness of our society, the naming scheme needs to be neutral enough not to offend most people. An online article I read described how administrators at a certain company would name their new servers based on how they felt about the food served on a given day at their company cafeteria; they ended up with names like "Squishy". Apart from the limited scalability of such a scheme, the potential for litigation is great enough to make it unsuitable.

So far we have fleshed out three principles for creating a server naming convention:
(1) User-friendliness: memorable names
(2) Scalability: a long enough list of names
(3) Reasonable non-offensiveness: names unlikely to invite litigation
What remains now is to examine how we can implement each in practical terms.

1. User-friendly names
The most common non-human names we use on a daily basis are street names. The most easily remembered yet common street names correspond to names of individuals or places. Washington, DC, is famous for using the names of states for many of its streets. Names of insects might also fly, pun intended, except perhaps for some of the more queasiness-inducing ones.

2. Scalability
The main problem with rule number two is that it is in direct conflict with rule number one. Obviously, the use of US state and territory names is not a good idea because there are fewer than 100 of them. County names might be a good idea since they number over 3,000. However, no matter how big the name set used, it will have a finite limit, and there ought to be a plan B for when the first set is exhausted. You should probably have on hand at least three sets of names to be used in turn as the preceding one becomes exhausted.

3. Reasonably non-offensive
Before I identify sources of names, I would like to suggest that such sources be documented, for two reasons:
(a) as a guide to your successors
(b) as a counter-argument to anyone who might deem a name offensive
Now, let's turn to the sources of name sets. The US Census Bureau has some wonderful data files that one can download from its web site and use free of charge or legal constraint. I especially recommend the Gazetteer section. A quick glance shows the following data sets with perfectly extractable names (a quick extraction sketch follows the list):
Counties (3,141 records)
MCDs [Minor Civil Divisions] (36,289 records)
Places (23,789 records)
Zips (29,470 records)
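
To make this concrete, here is a rough sketch of how one might pull clean candidate names out of such a file. The column position and delimiter below are placeholders rather than the actual Census file layout; adjust them to match whatever file you download.

```python
# Sketch: pull candidate server names out of a gazetteer-style text file.
# Assumptions (not the actual Census file spec): each line holds a place name
# in a delimited column; tune NAME_COLUMN and the delimiter to the real file.
import re

NAME_COLUMN = 2          # hypothetical position of the name field
MAX_LENGTH = 15          # keep names short enough to type comfortably

def extract_names(path, column=NAME_COLUMN, delimiter="\t"):
    names = set()
    with open(path, encoding="utf-8") as gazetteer:
        for line in gazetteer:
            fields = line.rstrip("\n").split(delimiter)
            if len(fields) <= column:
                continue
            # Normalize: lowercase, strip anything that is not a letter or digit.
            name = re.sub(r"[^a-z0-9]", "", fields[column].lower())
            if 3 <= len(name) <= MAX_LENGTH:
                names.add(name)
    return sorted(names)

if __name__ == "__main__":
    for candidate in extract_names("counties.txt")[:20]:   # hypothetical file name
        print(candidate)
```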

Online phone books are useful as long as you only use last names, and so are baby name sites, although I would recommend staying away from the more common ones. Even sites not usually associated with lists, such as Wikipedia, can be sources of lists that are long enough and have names that are easy enough to remember.

Now, with all these names to administer, you are going to need a good database to make sure there are no mix-ups. LDAP and/or Active Directory, or whatever equivalent you have at your firm, are good for basic management and search. However, for better navigability, and even to store unassigned names for easy accessibility and quick implementation, an external database with an easy-to-use web interface would be best. Ideally the server management software should keep both databases (LDAP/AD and the server list database) synchronized and allow for searches based on any combination of criteria.
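
To make this less abstract, here is a minimal sketch of such a name pool using SQLite. The table and column names are my own invention; a real implementation would sit behind that web interface and be kept in sync with LDAP/AD by your management software. Note that it also covers the "plan B" idea from the scalability rule by draining one name set before touching the next.

```python
# Sketch: a minimal SQLite "name pool" that hands out the next unassigned name
# and records which server got it. Schema and names are illustrative only.
import sqlite3

def init_db(path="name_pool.db"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS names (
                      name TEXT PRIMARY KEY,
                      name_set INTEGER NOT NULL,      -- 1 = counties, 2 = places, ...
                      assigned_to TEXT,               -- NULL while still available
                      assigned_on TEXT)""")
    return db

def load_set(db, names, set_number):
    db.executemany("INSERT OR IGNORE INTO names (name, name_set) VALUES (?, ?)",
                   [(n, set_number) for n in names])
    db.commit()

def assign_next(db, description):
    # Exhaust set 1 before touching set 2, and so on (the "plan B" principle).
    row = db.execute("""SELECT name FROM names WHERE assigned_to IS NULL
                        ORDER BY name_set, name LIMIT 1""").fetchone()
    if row is None:
        raise RuntimeError("All name sets are exhausted; load another set.")
    db.execute("""UPDATE names SET assigned_to = ?, assigned_on = date('now')
                  WHERE name = ?""", (description, row[0]))
    db.commit()
    return row[0]

if __name__ == "__main__":
    db = init_db()
    load_set(db, ["adams", "baker", "clark"], 1)
    print(assign_next(db, "accounting web server, Denver"))
```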

In the end this should make the management of your server farms easier at a time when server proliferation is such that a company need not be named Google to list thousands of servers in its farm or farms.

I would go on to speak of workstation names, but they are of less concern. Besides, there is software out there that can handle them almost automatically (such as the Windows 2000 Remote Installation Service). However, if you still need ideas for workstation names, I would point you to the baby name sites as an easy source.


Monday, December 8, 2008

Patterns and Other Buzzwords

Recently I was reading another blog post, by way of Slashdot, about MVC, which reminded me of a pet peeve I think is worth bringing up.

Let me begin by saying that architectural patterns like MVC or software design patterns like Singleton are a wonderful way to encapsulate high or mid-level software structures for easy re-use or combination.

Like plays in an NFL team's playbook, they provide the software developer with prepackaged strategies or tools to tackle a particular problem. Therefore I can only recommend that developers learn all they can about as many patterns as they can and stretch their brains figuring out under what conditions each could best be utilized.

What is sickening, if not disheartening, is to see such great concepts reduced to buzzwords. Too many developers plan their projects with merely the buzzwords in mind — damned be reliability, adequacy, security, scalability or the customer/user's needs. It's all about saying that they got to use the Facade pattern combined with the Abstract Factory along with the Observer, the Visitor and the Active Object patterns and ... oh, it was all very MVC.

This is putting the cart before the horse. The right way to do it is to begin by assessing what needs to be done and then choosing the right tools for the job.

In the same way that we are careful about hardware provisioning and platform, IDE, and language choice, we should plan our choice of patterns to match the desired result. This might mean that mid-project we may have to dump some patterns chosen earlier and enlist others. It may mean that we may have to increase the complexity of the project or simplify it — usually the latter. The point is that the finished product should be the focus of all our efforts. The cult of buzzwords can only hurt the quality of our work.

And before I go, I should remind everyone that no amount of buzzwords can replace well-written code and proper documentation. These two cannot be emphasized enough. Many developers enjoy indulging in obfuscation without realizing that they are making it easier for the next guy to blame them even for new bugs introduced into their murky code.

So, do yourself a favor and forget the buzzwords. Choose the patterns that suit your project, or invent your own if none fits — hard to imagine, but the existing patterns had to be invented at some point. Refactor as much as you can — think of refactoring as preemptive debugging (it often is). Document as much as you can, either through a tool or by means of helpful comments in code; often an older you will be your most avid reader.

And finally, learn as much as you can about design patterns, especially their strengths and weaknesses. Patterns are very powerful if used judiciously, but disastrous when used carelessly. Now go change the world with your code.

Thursday, November 20, 2008

Misleading Figures

Many a project manager is pressed to, or concerned about, measuring developer productivity, and some reach for the lowest-hanging fruit: lines of code. It is unbelievable the kind of things that still go on in some IT shops!

This is not only an imprecise metric; it encourages code bloat and other bad programming practices while discouraging mainstays of good programming such as debugging and refactoring.

Some who agree with my points above may be devotees of another metric that is only slightly less misleading: the number of builds or check-ins. While it is true that a productive developer is likely to check in and build code often, an unproductive developer can easily game this system to appear highly productive.

Yet another way of measuring productivity is by means of bug report or enhancement request tickets. This is probably the most realistic method of gathering developer productivity metrics, but it can also be misleading. In some cases the incident response system is also used for billing a client, and billable tickets do not necessarily translate into productivity — as tempting as it might be to treat them that way.

So, how do you reliably and fairly measure developer productivity? You don't. To put it a better way, you shouldn't have to. A bug tracking/change request tool is very useful, but it should be used the way sports teams use game videos — to learn from past experiences and improve performance.

Hold on to that sports analogy, because that is a good place to be. A development shop is a team effort, and what makes a developer valuable is how much of a contribution he/she makes. Although different metrics may point to the top performers, a good project manager needs to be wise enough to know the strengths and weaknesses of each team member and position him/her for maximum impact.

The productivity measured and relied on most should be that of the team. This is not to say that we should give cover to freeloaders and underperformers; those should be dealt with as soon and as severely as possible. The point is that more can be achieved once we realize that software development is more like a barn raising than a marathon.

Saturday, October 11, 2008

Embarrassing Skills

No, this is not a post about shop-lifting or any other such antisocial skill. Besides, this is a blog about technology so expect me to refer to technical skills.

Logic would dictate that any skill remotely related to programming would be a feather in a developer's cap. After all, the more a developer knows, the more valuable he/she could potentially be to a project, right? Unfortunately, many who became software developers after the birth of the web have decided that certain programming-related skills are so shameful that they will feign or pursue ignorance of them at all costs.

This avoidance of knowledge has led to some of the most inefficient and jumbled projects ever. But, before examining what kind of jumble can result from such an attitude, let's name some names.

HTML
When presented with this acronym, many a developer will state the obvious: HTML is not a programming language. Bingo! It is not; neither is PostScript or TeX. Now, it is kind of ridiculous that members of the generation that grew up making joke websites are so anti-HTML that they refuse to acknowledge the very existence of what has become the fastest growing display markup language. I am not leveling this criticism at developers whose applications live on the desktop or on embedded systems. Such developers have their own user interface issues to worry about. If you write strictly Swing, MFC, Cocoa, or X11 applications and you do not know the first thing about HTML, fine.

I am speaking of developers whose applications live on the web who cannot tell whether their applications are outputting the right kind of markup. That they would consider their ignorance (real or fake) to be a virtue makes their attitude all the more egregious. A J2EE or ASP.NET developer who does not know HTML is like a C++ developer who believes he/she is above learning about #defines or compiler switches.

JavaScript
This one is a favorite language to hate for many J2EE developers. The irony of it all is that, for better or for worse, Java is the closest relative to JavaScript. It's fair to say that JavaScript is as related to (or descended from) Java as Java is to C++. Yet I have not met any C++ developers who disavow any knowledge or use of Java. For all of its flaws and poor structure, JavaScript is not likely to be replaced as the web's premier scripting language, in the same way that SQL will remain the premier database scripting language well into the future.

XML
This one is more an issue of prejudice, since many developers don't seem to understand that, for the foreseeable future, there will always be a need for heterogeneous systems to communicate over networks, and that binary objects sent over multiple hops are too susceptible to corruption to be reliably transported. Also, in the case of XML, most developers who hate it just don't get it; the principle of serialization is lost on them.
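
For those who just don't get it, the whole idea fits in a few lines. The sketch below is purely illustrative (the element names are made up, and a real system would follow an agreed schema), but it shows a record leaving one system as plain-text XML and being reconstructed on another.

```python
# Sketch: the serialization idea in a nutshell. An in-memory record becomes
# plain-text XML that any platform can parse, then is read back on the far side.
import xml.etree.ElementTree as ET

def to_xml(order):
    root = ET.Element("order", id=str(order["id"]))
    for field in ("customer", "amount"):
        child = ET.SubElement(root, field)
        child.text = str(order[field])
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    root = ET.fromstring(text)
    return {
        "id": int(root.get("id")),
        "customer": root.findtext("customer"),
        "amount": float(root.findtext("amount")),
    }

if __name__ == "__main__":
    wire_format = to_xml({"id": 42, "customer": "ACME", "amount": 19.95})
    print(wire_format)             # safe to ship between heterogeneous systems
    print(from_xml(wire_format))   # reconstructed on the receiving side
```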

Language
I mean language as in human languages. To be more specific, written human languages. The truth about our profession is that we are creating systems that interact with humans in ever more complex ways. These systems will need to be more user-friendly at least and, in some cases, even human-like.

If you are a developer who thinks it is cool to not know stuff, remember that you will suffer from any technology you depend on professionally that you do not understand. Knowledge, my friend, is power — and, on rare occasions, it is also job security.

If you are a lead developer or project manager and you notice any of the above attitudes among your team, nip it in the bud because such behavior will eventually hurt your projects.

Tuesday, October 7, 2008

Be Careful of "Consultants"

As I was reading an article about saving energy in the data center on the InfoWorld site today, I was reminded of how IT managers can be misled by the IT press and so-called consultants. Written by Logan G. Harbaugh and entitled 10 IT Power-Saving Myths Debunked, the article introduces at least one myth of its own. I am not alone in my assessment, as many of the readers proceeded to shred the author to pieces in their feedback comments.

Here, for example, is a real gem. Myth number two says, "It takes too long to cold-start servers to react to spikes in demand. If customers are made to wait, they'll go elsewhere." The author then proceeds to make asinine suggestions such as holding a site hostage until additional servers — which you turned off on his advice — are brought online. He supposes users will sympathize with your cause and wait around until your site is good to go.

Apart from providing some belly-holding, rolling-on-the-floor laughs, this bit of advice is not only naive, but it betrays absolute ignorance as to how data centers and servers work.

First of all, Mr IT journalist guy, the users will go elsewhere — even card-carrying Green Party members seeking to buy recycled products will move on and will likely never come back — this is a lesson every e-commerce site owner learns real fast if he/she is to survive.

Secondly, have you any idea how long it takes for a server to boot up and make itself available on the network? Users expect their pages to load in 5 seconds or less.

Regardless of operating system, a data center server needs a good 5 minutes to boot up — and that is on a good day. This does not include integration into a server farm or registering with the load balancer. It is utterly impractical to even suggest that you can keep part of your server farm unplugged and bring it up in time to deal with increased traffic.

But the purpose of this post is not necessarily to beat up on Mr Harbaugh — although he does deserve the beating. Instead I want to focus on all the times and circumstances in which management listens to boneheaded advice from "consultants" or IT journalists, such as our friend Logan here, and hurts its business.

This happens so often that it poses a real threat to IT operations. I would include sales reps in this list, but I assume any MBA holder with half a brain is smart enough to realize that sales people will always say what their targets want to hear in order to get their respective commissions.

Consultants, consulting firms and trade publications (the poor manager's consultant), however, are expected to be on the side of the people who come to them for advice. But it has often been the case that they fail their trusting followers miserably.

If these failures were the product of human imperfections, I would not be so upset; but they are instead caused by other factors that should not be allowed to go unchecked. Here are a few, along with — and you will love me for this — ways to recognize when you are being had.

1) Agendas and/or Causes
No matter how noble and just a cause, trade publications and consultants must reserve their first allegiance for you, their client or customer. When you go to the grocery store to buy milk, you do not expect to get a pamphlet on animal cruelty instead of the gallon of the white stuff you are there to pick up. You can tell preachy consultants by their lengthy speeches that have nothing to do with the task at hand. As terrible as the situation in Burma is, your server farm should be the topic of conversation in a meeting called for that purpose.

2) Unholy Alliances
Regardless of how good the products in question might be, consultants or trade publications that take the side of vendors are no longer what they claim to be. They have become mouthpieces and salespeople.
The job of a consultant (and, hopefully a professional journal) is to review products and services and only recommend the best of breed.
One easy way to tell these salespeople in disguise is by the way they give unqualified praise or criticism to products of a given brand, to the exclusion of all others in their class. This is especially evident when the consultant in question is ignorant of industry practices such as outsourced manufacturing and "badge engineering."

3) Impractical Advice/False Information
You normally trust consultants and IT journalists to provide you with information you do not readily have access to. It is only reasonable that you be able to trust the information they give you. A simple way to protect yourself here is to test them on facts you already know; if your consultant is wrong there, he/she is likely to be wrong in other areas as well.
Another great way to determine if you are dealing with the real McCoy is to watch out for defensiveness or unwillingness to answer questions you or your staff might have. Good consultants expect to be challenged; they welcome inquiries, are open to suggestions and listen a lot.
If your consultant is unresponsive and full of hubris, consider cutting your losses early. By the way, always protect your shop by placing clauses to deal with such things in your consulting contracts.

4) Information Hoarding and/or Secretiveness
Demand transparency from all your consultants. If a consultant is not transparent in his/her dealings, look elsewhere. Always stress the need for extensive documentation and knowledge transfer to your staff.
If a consultant insists on keeping you tethered without a really good reason — for example, you have just had a distributed supercomputer cluster deployed and its optimization will take a few months — take it as a clear sign that you should keep looking. This is not to say that you should limit your choices to open source solutions or that you should expect consultants to give away their trade secrets. Rather, what is advocated here is a reasonable delivery of a finished product with enough documentation and training to ensure maximum benefit over the expected life of the product.


Don't allow yourself to be conned. No need to thank me. That's what I am here for.

Monday, October 6, 2008

Hack me once, shame on you; hack me twice, shame on me

I read a report about a Microsoft programming contest site that got hacked last week. I did a double take because I'd assumed Microsoft was past this sort of thing by now — what with their sites being the most targeted and all. But they keep getting hacked and they keep making excuses.

Although this is a big embarrassment, Microsoft is hardly alone in the ranks of the hacked. So instead of focusing on the Redmond giant alone, I want to dedicate this piece to all the shops who've had their sites hacked repeatedly and have taken no significant steps to stop the problem.

The truth is that it is possible to run bulletproof sites. I can confidently state that there is enough technology and expertise out there to stop 99% of attacks lobbed at a site and preempt the other 1%. I know this because I have been involved in setting up secure sites and detecting potential attacks.

I know that, although there might be attacks that sneak through, a properly architected site can handle even those that get past the first lines of defense.

The first thing to do in implementing security for an online site or application (or any application for that matter) is to make security part of the fundamental design of both software and hardware configuration.

It is indispensable that a security-oriented culture pervade all IT teams: development, deployment, network, hardware configuration/provisioning, architecture, project management, business analysts, etc. In the same way that we debug and refactor code, we need to make security a basic requirement.

The worst way to implement security is as an afterthought. This may take one of two forms. The most egregious case is when security is only mentioned after a major breach. This usually leads to some directive to the effect that everyone should "secure" their applications. Developers may add to their projects code that is claimed to fix the problem (copied from somewhere or other), and the problem is considered "solved"... until the next breach, when the rinse cycle is repeated.

The other method is barely better. I call this the "someone found religion" way of dealing with security. This is when some project manager goes to a security conference, comes back all fired up, and a memo is sent to all developers to "please remember" to add security to their code — as if it were salt for their boiled eggs. Such memos are merely acknowledged before everyone gets back to business as usual.

The way to really tackle security is to deal with it at every stage of development. From the very first moment that a business analyst hands off a set of requirements to the architect, security has to be the sine qua non of every subsequent stage. Every component or feature is to be planned and tested with security in mind.

Developers must be rewarded for finding and caulking security holes. Security needs to be included in every estimate and budget. Security must be the number one show-stopper in every release.

The application should not be signed off on until exhaustive security testing is done. Security is to be part of unit, regression and integration testing. Security should account for no less than 25% of all QA scripts and must be included in the UAT process as well.
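
To make the point concrete, here is a sketch of what a security-minded unit test can look like. The function under test is hypothetical; what matters is that injection attempts sit in the regular test suite right next to the happy-path cases.

```python
# Sketch: security cases treated as ordinary, mandatory unit tests.
# build_user_query is a made-up function for illustration.
import sqlite3
import unittest

def build_user_query(username):
    # Parameterized query: the username is bound, never concatenated into SQL.
    return "SELECT id FROM users WHERE name = ?", (username,)

class SecurityRegressionTests(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.db.execute("INSERT INTO users VALUES (1, 'alice')")

    def test_injection_string_returns_nothing(self):
        sql, params = build_user_query("alice' OR '1'='1")
        rows = self.db.execute(sql, params).fetchall()
        self.assertEqual(rows, [])        # the attack string matches no real user

    def test_legitimate_user_still_found(self):
        sql, params = build_user_query("alice")
        rows = self.db.execute(sql, params).fetchall()
        self.assertEqual(rows, [(1,)])

if __name__ == "__main__":
    unittest.main()
```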

With such a culture, embarrassing, money-losing breaches will become a thing of the past.

A lot of people might grumble that this position is not practical, that tight deadlines get in the way. Well, here is how you fit security into the real world: reengineer your operation until you can. Many shops have already done it, so there are plenty of examples to learn from.

Maybe the problem is release cycles or division of labor. Perhaps development methodologies are at fault. There could be a need to modify the technology mix or drop inefficient technologies. The truth is that, as our applications gain size and importance, we need to take security as seriously as automakers are required to.

Do you want to be responsible for the next theft of millions of credit card numbers, or for an e-commerce site crash that costs millions of dollars? If you don't want to have to deal with the guilt and embarrassment that no amount of finger pointing will assuage, then you must make security a part of your development process today.

Wednesday, October 1, 2008

Does CAPTCHA Have a Future?

ZDNet and other tech sites are reporting that Microsoft's Hotmail CAPTCHA (Completely Automated Public Turing test to Tell Computers and Humans Apart) has been under attack again, even after the software giant made some changes to reduce the chances of spammers breaking through. The report says the CAPTCHA busting techniques used by the attackers have had a success rate as high as 15% — that's spammer heaven!

Incidents such as these have caused a lot of people to write off CAPTCHA. Unfortunately, unless a company wants to have humans poring over questionable submissions, there really is no alternative. Audio CAPTCHA (which is less an alternative than a complement), I am afraid, is actually easier to break than printed CAPTCHA.

I submit that the issue is creativity. Spammers and hackers have busted CAPTCHA more by grunt work than by smarts. The odds are somewhat in their favor, since they need not have a 100% success rate, while their victims have to be able to beat them every time.

Just like with authentication and credentials, CAPTCHA by itself is breakable with relative ease. However, this ease can be reduced by several orders of magnitude if we increase the number of factors that would generate a positive —remember two-factor authentication?

I have read that many spammers have actually used HAC (Human Assisted Computing) to break CAPTCHA. All the spammers have to do is set up a front site (offering things like movie downloads or pornography) and use the target site's CAPTCHA as if it were their own. When the user of the front site passes the CAPTCHA test, the spammer immediately gains access to the target site.

This can be fought in many ways. Right off the bat, you probably want to limit or block image linking from external sites. If this is not possible, you could submit the URL of the form along with the user's CAPTCHA input; any external-site shenanigans will be detected with a simple URL match check.
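
Here is a rough sketch of that URL match check. The host names are placeholders, and I am assuming the form URL arrives with the POST that carries the CAPTCHA answer; the point is simply that an answer collected on someone else's front site gives itself away by its origin.

```python
# Sketch: reject CAPTCHA answers that were not collected on one of our own pages.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example.com", "example.com"}   # placeholder: your own domains

def captcha_origin_is_ours(form_url):
    host = urlparse(form_url).hostname or ""
    return host.lower() in ALLOWED_HOSTS

if __name__ == "__main__":
    print(captcha_origin_is_ours("https://www.example.com/signup"))       # True
    print(captcha_origin_is_ours("http://free-movies.example.net/win"))   # False: likely a front site
```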

Beyond getting more from CAPTCHA as it exists today, I would suggest improving on the current technology. For example, a photograph accompanied by questions only a human could answer, such as "which person in this picture appears to be youngest?" Another idea would be to use a short movie and ask the person to describe what is going on, or what action preceded what other action.

Another idea would be to analyze the user's behavior to determine whether it is a human or a machine. Following mouse movements for as little as one second can tell whether a hand is human or simulated. Most scripts do not produce any mouse movement, and if they do, it is likely to be stilted.
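
Here is a sketch of what such a check could look like on the server side, assuming the page posts back a list of timestamped mouse positions. The thresholds are guesses for illustration, not tuned values.

```python
# Sketch: a crude heuristic over mouse events captured on the page, where each
# event is a (milliseconds, x, y) tuple posted along with the form submission.
def looks_human(events, min_events=5, min_spread=3.0):
    if len(events) < min_events:
        return False                       # no movement at all: almost certainly a script
    xs = [x for _, x, _ in events]
    ys = [y for _, _, y in events]
    # Humans wander a little on both axes; synthetic paths tend to be perfectly
    # still or perfectly straight.
    spread_x = max(xs) - min(xs)
    spread_y = max(ys) - min(ys)
    return spread_x >= min_spread and spread_y >= min_spread

if __name__ == "__main__":
    bot = [(i * 10, 100, 200) for i in range(20)]                  # cursor never moves
    human = [(0, 100, 200), (35, 104, 203), (80, 111, 198),
             (140, 118, 207), (200, 125, 211)]
    print(looks_human(bot), looks_human(human))                    # False True
```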

Another thing technologists and developers should remember is that CAPTCHA is more a concept than a technology — like artificial intelligence. As long as the objective is achieved we can call whatever we do CAPTCHA.

As computers become more sophisticated, we will have to come up with new Shibboleths to entrap non-human users. All it will take is us staying in touch with our creative sides.

Monday, September 29, 2008

A Few Notes on Solar Energy

A few days back I used a news story about a 12-year-old who'd invented a more efficient type of solar panel to make a point about software development. But today I want to state a few practical facts about solar energy, because I sense that even people who label themselves "pro-environment" or "pro-alternative energy" are ignorant of the true state or value of this remarkable technology.

For example, a myth believed by those who have no practical experience with solar-powered devices is that energy is only produced when solar panels are under direct sunlight. The truth is that the reason these cells are also called photovoltaic is precisely that they produce voltage when exposed to photons (light). So even on cloudy days energy is produced, although not at full capacity. This underscores the need for batteries if the panels are to provide continuous power. Such an arrangement, however, effectively precludes solar power for energy-intensive appliances.


I recall seeing a story on television about an NGO that insisted that the lights, refrigerator and other equipment at a certain African village clinic be completely powered by solar panels — as a condition for their sponsorship, I guess. This "solution" caused more trouble than it solved. Refrigerators (which are vital to the operation of a clinic in tropical weather) need a lot of wattage, which the solar panels could not supply. A better solution would have been to set up a smart system that would fire up a backup diesel generator whenever the power level fell below a certain watermark.
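
For the curious, such a "smart system" amounts to little more than a watermark-driven control loop. The sensor and generator hooks below are hypothetical stand-ins for whatever charge controller and relay hardware would actually be installed.

```python
# Sketch: keep the clinic on solar/battery power and run a diesel generator
# only while the battery sits below a watermark. All hardware hooks are
# hypothetical callables supplied by the real installation.
import time

LOW_WATERMARK = 0.30    # start the generator below 30% charge
HIGH_WATERMARK = 0.80   # stop it again once the bank is back above 80%

def run_power_manager(read_battery_level, start_generator, stop_generator,
                      poll_seconds=60):
    generator_on = False
    while True:
        level = read_battery_level()             # expected in the range 0.0 .. 1.0
        if not generator_on and level < LOW_WATERMARK:
            start_generator()
            generator_on = True
        elif generator_on and level > HIGH_WATERMARK:
            stop_generator()
            generator_on = False
        time.sleep(poll_seconds)
```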


This brings me to those on the other extreme of solar energy. These are people who will lie and make exaggerated claims to promote solar energy. They do even more damage to the emerging technology's prospects of acceptance because those who have been burned by the misuse of a technology will become its most vicious detractors.

Those wishing to promote solar technology need to tell the truth if they really want to make a difference. And the truth is we really don't need to resort to deception to promote solar power generation. Things are a lot better than many people think.


As someone who has used solar power in my private life, I believe solar is a great boost when used where it works best. If the solar proposition is good enough, people will be motivated to embrace it regardless of whether there is some cachet to it, or whether there are tax credits.

For example, installing solar panels on part of the roof of a suburban home could reduce that home's energy bills during the sweltering summer months — the very time that AC power is needed most, on account of the sun itself. This principle could be applied to cars, boats, etc. People will be happy to use solar power as long as it is used within its limitations and where it's practical.

Many times I wonder if some of the so-called solar energy promoters really want to turn people off from it — maybe out of elitism, or resentment of consumerism, or whatever.

This is regrettable, because solar technology is getting so much better that we might be within range of the day we can perform a lot of key tasks on more than 90% solar power.

Case in point: it is possible, with current solar technology, to generate a full 50 watts of electricity at 13 volts using a 25x25-inch solar panel. This would already be enough to reduce the need for car battery power to operate the AC and entertainment console, and to improve gas mileage somewhat. The price for such a device is between $200 and $300. This might sound like a lot until one considers that car dealerships add such amounts to the price tag for minor accessories like CD players. Imagine if this were an option for you when you went to get your next car. Imagine further if the car in question were a hybrid.
See, there is a lot that solar energy can do for us today if we go about it the right way.

And we are just talking about existing technology. Two inventions from this year make me think that we have seen just the tip of the iceberg.

Back in January, Rice University scientists were able to create a material using carbon nanotubes that could absorb 99.955% of light — the darkest material on earth yet. Solar panels rely on light absorption, so this could boost their efficiency.

To make things even better, twelve-year-old William Yuan's invention of 3D solar cells promises to boost solar panel energy output by a factor of 500. The future indeed seems bright. So please, zealots out there, don't get in the way of progress with your misrepresentations. True science needs no propaganda.



Useful Links


1☀ 12-Year-Old May Hold Key to Solar Energy
2☀ 12-year-old Revolutionizes the Solar Cell
3☀ 'Darkest ever' material created
4☀ A Blacker Black: Darkest Known Material Created

Friday, September 26, 2008

What platform?

I have noticed a number of opinion pieces in the technology press whose sole purpose appears to be to persuade developers to write software for one platform or another. The most recent pieces deal with the advent of Android and the ascendancy of the iPhone (both mobile software development platforms). As with the OS wars that began in the early 90s (Windows vs Mac vs Linux vs OS/2 vs Solaris), the browser wars of the late 90s (Netscape vs Microsoft) and the application framework wars of the 00s (Java vs .NET vs LAMP), this latest contest will generate its share of columns, forum discussions, blog posts, flame wars, etc. But it would be foolish for us not to draw lessons from all this that would make us better software engineers.


The platform argument is not too far from the language argument and the principles that apply to the one are just as useful to the other. However, since my focus today is platforms, I will stick to the one term while encouraging you to think of the other as you read along.



As a developer/architect who has often observed the experiences of many colleagues and technology ventures, I believe three principles should rule our choices, because they have been used to ensure success in just about every industry:




  1. The customer is always right.


    This one comes from the world of business and is, perhaps, the least favorite quote of many developers, so it bears some explaining: the customer hands over his/her hard-earned cash to your boss so he/she can pay you; no customer, no money for you. It's that simple.
    Of course you can and should educate customers but, in the end, you have to meet them where they are. This, however, does not need to be too painful to put into practice. After all, no platform stays the same or at the forefront too long anyways. In a sense, the dominant platform is like the weather; if you don't like it, just wait for it to change.





  2. You fight with the army you have, not with the army you want.


    No doubt from the military world, this principle is that of practicality within constraints. Way too many developers are crybabies who think it is a sign of class to be seeking ways to prove why certain things cannot be done. I would like to remind these developers that software development is, if anything, "the science of possibility" — not impossibility. Even in the technical world, consumer choice is fickle. Once a platform emerges as dominant, there is very little we can do except meet our customers there. Our value as "scientists of possibility" will be proven by our ability to creatively get around the limitations of the platform du jour.




  3. The end justifies the means.


    This one is often quoted in the political world, and I hesitated to include it because it has often been used to excuse atrocities. A more palatable translation would be "do whatever it takes to get the job done." This principle is not difficult for developers to embrace; however, its application isn't always consistent or well understood. Two terms whose definitions are often muddled are "job" and "done." Not very big words — way below the $64 price point — yet contentious. What is a developer's job and when is it done? A software developer/engineer's job is to provide software in the same way in which a builder delivers completed homes. This means that the developer needs to appreciate the user's point of view — as opposed to ridiculing it, as is often the case — and use whatever platform or tool is necessary to deliver a product that meets user acceptance. I am a firm believer in UAT, although too many developers hate dealing with it.





Does this mean that developers should not evaluate platforms and develop criteria for evaluating the best? No. In fact I encourage developers to be continuously evaluating platforms and play favorites. This is not only important because, as engineers, we often play the role of consultants but also because we may end up creating our own platforms one day and need to have clear in our minds what makes one platform better than another. So, learn to select and learn to justify your choice using technically sound, logical arguments.

A good way to choose a preferred (or reference) platform is to make a wish list of all the things that a platform should have. Make sure you take into account not just your needs as a developer or your approval as an engineer but the user's/customer's needs as well. Once you have completed this list, use it as your basis for choosing among the existing ones. Or, better yet, build your own — although that would be venturing into venture capital territory and the subject for a whole other post or type of blog.

Thursday, September 25, 2008

COMCAST's Commitment to Disservice

After being slapped by the FCC for protocol discrimination, COMCAST has now set up a new way to throttle Internet usage based on how much bandwidth a user is found to be taking up. They propose to do this every 15 minutes; this is in addition to their previously announced 250GB per subscriber usage cap.

I thought about this and realized that all of these shenanigans take a toll in technology and labor. In other words, their nickel-and-diming costs hard dollars. Wouldn't this money be better spent improving their networks?

In their grand tradition of offering poor customer service, this erstwhile cable operator turned ISP (and ersatz telephone company to some) is ensuring that their customers will go knocking on competitors' doors. They did it with cable programming by driving customers to DISH or DirecTV, and now they are doing the same with Internet access.

I am not saying that COMCAST does not have the right to limit bandwidth. As long as they are upfront about it, I do not object. The problem is that they tout their "high-speed Internet service" suggesting that it is unlimited. When your cellphone provider tells you that weekends are free, how would you feel if they cut you off after you've been on the phone for 4 hours?

All I ask of cable companies is honesty. Stop saying you offer unlimited Internet access; put your cards on the table. State upfront how many gigs the user will be allowed and how much of it per hour they are allowed.

I want some truth in advertising from cable companies such as COMCAST. Instead of putting advertising lipstick on the pig of the crappy service package they offer, how about investing in technologies that will make these services more desirable?


As someone who can count among my mentors a cable television systems engineer, I am not unsympathetic to the technical challenges and regulatory obstacles cable operators face. But the cable television industry is not alone in facing obstacles. The big difference is that cable companies appear to be the most resistant to innovation — or decent customer service, for crying out loud!

Satellite and phone companies have been improving their services, but many cable companies, COMCAST chief among them, remain stuck in the 80s. To make matters worse, they appear to recruit their customer service reps from among people who were not nice enough to work at the DMV.

It takes as much effort, if not more, to set up four TV receivers at a suburban home with a satellite dish, receivers and DVRs as it does to do the equivalent with digital cable. Yet a satellite company can get someone to your place within a 15-minute window no more than a couple of days away and get the job done, while COMCAST needs weeks to get you an appointment and imposes the dreaded 8am-to-4pm-on-a-workday window once a date is arranged. But I digress.

So, you guys at COMCAST, here is what you need to do. In those colorful ads or commercials where you show the smiling families with their shiny new computers, specify at the bottom of the display, or mention at the end of the commercial, your 250GB upload-download limit and your quarter-hourly connection re-prioritizing — in the same way that pharmaceutical companies list the side effects of their drugs. Then users will realize that other technologies such as DSL, FiOS, DirectWay or even ISDN might be better propositions.

Wednesday, September 24, 2008

Back to the Basics

Now that the Android mobile platform has come head-to-head with the iPhone, this might be a good time to take a brief look at the different philosophical debates that have existed in the software development world over the years. I have always found the passion that has fueled some of the positions ironic, because our line of work is a scientific one — and science is about logic, not emotion (or devotion, for that matter).

What science(s) are we dealing with? This might sound like an irrelevant —some would even say foolish — question, but I ask it because I think it helps if we go back to the basics.

Software development, as it exists today, is a combination of mathematics (on which all programming paradigms are based) and physics (on which all hardware is based). Some may argue that chemistry plays a role, since the components that make up computers contain chemical compounds; I think that is not critical to our field since, by the time they make it into computers, they have already been stabilized and are not expected to change after that. Worthy of note also is the fact that there are experiments underway that could create computers that rely on chemical reactions or biological processes to compute — in which case, we would have to add chemistry and/or biology to the list.

Notice, however, that although the second item on the list of constituent sciences might change, the first one remains constant. This means we can invariably refer to computer science as applied mathematics. Something that is essentially a form of mathematics should be so consistently logical and proof-based that arguments should be easily settled, right? Not really.

Even in math departments nationwide, human nature and emotion play a role. I think, therefore I emote — or is it the other way around? Either way, it's all downhill from there.

This tiny problem, merely noticeable in the math lab, is greater by several orders of magnitude in the software world once you throw in marketing departments, non-techie managers, job security considerations, large egos and personal insecurities. As a result, many software development forums sound like the kind of debates one is likely to hear when religious leaders meet.

I would like to propose that, as tempers flare and insults get hurled, we remember our roots. Software is based on logic (or mathematics, if you prefer). I cannot argue my way out of the laws of logic; how then should I expect to argue about software that is based on logic without being logical?

Remember this the next time you face another discussion that pits proprietary against open source, Windows vs Mac, Linux vs Windows, iPhone vs Android, Blackberry vs Smartphone, C# vs Java, C++ vs C, Haskell vs Fortran, etc. In all these arguments there are facts that are incontrovertible and reasons why things are the way they are. Usually the foaming at the mouth or the flame wars are a sign that logic — the foundation of our craft — has left the room.

Does this mean that we should never discuss alternatives? Not at all. It means that we must discuss as scientists pursuing a provable goal, using scientific methods.

As with other scientific exchanges, our discussions should have only one goal: to discover the truth at hand. For this to happen we need to confine our arguments to the facts, be honest about the unknowns and be intellectually honest enough to stand corrected when the other side has submitted valid proof. This requires discipline, but so does every worthwhile endeavor — especially a scientific one.

So, please, let's end the religious wars. We are not seminarians or monks, for crying out loud. Let us seek to overcome our personal insecurities and ignorance through self-improvement and learning — therapy won't hurt either. Our job is to build the future. What an awesome task!

Tuesday, September 23, 2008

Android vs iPhone

So the first commercial Android phone is being sold by T-Mobile at a price point that's quite close to that of the iPhone. It has a physical pullout keyboard, and its features are (perhaps) fewer and less impressive. So, can Apple rest on its laurels? Not really. In fact, not at all.


The iPhone is still not in the full graces of developers — especially with third party apps getting rejected all the time for all sorts of reasons — which may cause the number of available applications for it to lag somewhat. Android, on the other hand, is as open as it could possibly be and is likely to attract a huge developer following, which could lead to more people being attracted to the huge number of applications available.

This is only a partial handicap for the iPhone, though. Most iPhone users I know, when they are not actually speaking on the phone or texting, spend most of their time on the Web. The Web might be the iPhone's trump card.

Aha! But there is the whole industry-vs-company thing that almost killed Apple back in the 90s. Android will be supported by several manufacturers who will offer competing designs while Apple's iPhone will remain limited to one design —or at least designs from just one company.

The contest is gearing up to be very interesting. I think it would be advantageous to all if there were no clear winner. After all, wouldn't it be nice to have a choice between phone types the same way we have a choice between car types? Unlike computers, phones need only be able to call and text other phones; no one expects to be sharing software with their friends... yet.

Even if there is an expectation of software sharing between phones, most people are likely to base their purchase decision on handset features (of which applications are a part) instead of on software sharing capabilities — this trend could, of course, change as a new generation of phone users might start transferring computerlike tasks to their phones and bringing with those tasks similar expectations.

Whatever the outcome, Android and iPhone (mobile OS X) are likely to become the dueling titans in the consumer space, with RIM's Blackberry and Windows Mobile vying for the business space. RIM is a clear leader in the business space, but the consumer world is still up for grabs. Moreover, whoever wins among consumers may very well come into the business phone space and unseat the leader there. It happened with computers when Windows gained ground on Solaris and other flavors of UNIX.

Monday, September 22, 2008

Lessons from the Financial Crisis

Yes, Grasshopper, the erosion of your stock portfolio holds valuable lessons that you can apply to recover all that money that you have lost. No, this is not a post on investment strategies — although I would encourage you by all means to get advice on that from a reputable source— instead, I want you to learn how the market's ups and downs are similar to vulnerabilities in the applications you develop.

Wall Street has often been compared to a "well-oiled machine," but that analogy does not quite match our line of work. Viewed as an application, however, the parallels are not difficult to see. Just as in any enterprise application, financial markets have
✎ Users with different levels of access and privileges : traders, directors, investors, etc.
✎ Different types of data such as stocks, bonds, futures, cash.
✎ Operating modes like day trading, overnight activity, etc.
✎ Metrics or indices such as the Dow Jones Industrial Average, NASDAQ, etc


Just as in any application, the amounts and types of data fluctuate, user actions are unpredictable, and so are the metrics. Despite all the alarming headlines, the financial markets have an outstanding record of correcting themselves. As applications go, Wall Street, with its numerous fail-safe mechanisms, has had an outstanding uptime record — name another application whose latest major malfunction was in 1987.

Speaking of high availability, another very real application that has been very much online since its launch has been Google — which, by the way, has made tons of money on Wall Street, but that's a whole other story. Google's vice president of engineering has stated that the secret of their success has been failure.

You will say that Google is doing everything but failing, and you would be wrong. Google is managing failure to its advantage. When failure is managed, it becomes unqualified success. Apart from its many projects in every area known to humanity, Google is at heart a data management company — OK, a search company — that deals with the challenge of storing and using large amounts of data despite the very real possibilities of disk, software, power or human failure.

Most companies cobble together a data center and pray nothing falls apart. Google, on the other hand, deploys a server farm and waits, tools in hand, to deal with the first failure to crop up. Their success in the data center stems from a realistic acceptance of Murphy's Law. A very homey and appropriate analogy would be tending to a baby; you know you are going to have to change diapers, so why not stock up?

Why isn't this mindset more widespread in the IT community? Why does Google stand alone on an island of reliability surrounded by an ocean of mediocrity? In the software world, especially, why do we continue to design and write software as if everything will always work as expected — as if everything else in life did?

Based on experiences and observations I have been able to identify four possible reasons:

  1. Backup plans are a tough sell with management.

  2. Lack of foresight.

  3. Laziness.

  4. The belief that we can scam our way out of this one.


Let's examine each and see how we can deal with it.

Backup plans are a tough sell with management
In an environment where managers seem to do everything quarter to quarter, any plan that deals with long-term stability is not easy to get funding for. However, before we hurry to heap all the blame on the suits, let's see why we may have failed to get them on board with our disaster recovery plans.

A good place to start is to go outside of the IT department — a very refreshing thing to do; you should try it sometimes. Let's take a look at the carpenter who is building the new bookshelf in the CFO's office. Notice his/her little "disaster recovery plan," expressed as steel reinforcements strategically hidden from view or jumbo-size rivets. Notice how the shelves are double layered, with extra columnar support to handle the extra weight of numerous hard-bound volumes. Notice the extra layer of lacquer being applied so that the bookshelf will resist casual scratches and retain its luster several CFOs down the road.

Have you ever wondered how come all of these "extras" were just included in the job once it was approved, and there was no "selling" involved in getting the bookshelf reinforced? I know you are thinking to yourself, "this vain CFO has money to beautify his office, but is always penny-pinching with the IT department." But I want you to focus instead on how the cabinet maker presented his bid and how you presented yours.

One of the things that executives like about carpenters, interior decorators and the like is that they get one price and one delivery date — which, by the way, is usually met. Do you think, if the carpenter had given an estimate for the bookshelf without reinforcements and then tried to sell those separately, he would have gotten the necessary money? I don't think so.

The problem we have with software projects is that we ourselves are not sold on the need for security, proper testing, disaster recovery, and the like. We tack these onto our project proposals at the last minute, and the ambivalence with which we present them tells the stakeholders that these are just geeky nice-to-haves. Management will not back us if we ourselves are not sold on our own plans.


Lack of foresight
With very little variation, I have heard the refrain "this application was supposed to be temporary, but we have been using it ever since" so many times that I can say with abundant proof and absolute conviction that software, once deployed, will always last much longer than anticipated.

The scandalous part, however, is how nonchalantly many fail to look ahead even with major releases. The usual excuse is that there isn't enough time to engineer a given product properly. I understand; I know what it is to be under the gun. However, what about looking for ways "to oil the machine" once it has been set in motion? In these cases project managers, supervisors and/or architects are especially to blame.

In the same way in which the members of the Mario Andretti racing team, when not building a new car, are continuously tuning the existing ones, development teams should be using their downtime to refactor, optimize and prepare code for the next release. A practice that I've observed in every successful development team has been to develop domain-specific APIs, code libraries and widgets that make it easier to build new applications or enhance the current ones.


Laziness
Sorry to put it so bluntly, but I have observed this one too many times. The laziness monster too often rears its head in poor coding practices and a flurry of porous "quick fixes" whose fallout down the road is never a small matter. Does this mean that software will always work no matter what? No. Although that is an ideal we should aim for, there is always that obscure permutation no one had counted on. However, failures so constant that they hardly give time to recover from the previous ones, and that seriously threaten mission-critical applications, are a sign of a corrosive IT culture. We need to develop teams of conscientious developers who will handle the company's code with the same integrity that we expect our accountants to bring to our investment portfolios and retirement accounts.



The belief that we can scam our way out of this one.
Please do yourself the favor of never promising something to management just because you know it will make you look good; the suits will eventually find out you lied to them and you will fry sooner or later. Promise deliverables that can actually be delivered. If you are selling management a bill of goods you know you can't deliver, you are playing the part of a con artist — not an IT professional. Resist the temptation to appear superhuman; instead, listen to the stakeholders and provide them with a solution they can rely on. Tone down the whiz-bang; focus instead on solving problems and filling needs.


What about deadlines?
This question always comes up when the need for better software is brought up, so I will address it. As a developer/architect and sometime supervisor, I have come to realize that another thing to be learned from the person building the bookshelf in the CFO's office is that he/she will listen patiently to all the requirements and plans the client has and will then, having clearly discussed the appearance and features of the finished product, agree with him/her on a realistic delivery date.

So why is this not the case when it comes to IT? Many times IT professionals alternate between being mice who just take notes while the stakeholders build a castle in the clouds, and overwhelming the meeting with technical jargon that has little or no connection to the needs at hand. I have found that, with managers and users alike, if it is made clear that we are on their side, they will accept our timelines and give us room for realistic deadlines.


Remember, IT is different only in technology. Quality and customer service are concepts that apply to our line of work as well as they apply to others. Let's learn from the winners.

Friday, September 19, 2008

Yuan and Natural Design

Let me be cruel, not unnatural...

—William Shakespeare




Twelve-year-old William Yuan, of Beaverton, OR, has invented a 3D solar panel that could potentially absorb hundreds of times more energy than current designs. This, of course, would be a great boost to the possibilities and uses of the Sun as an energy source.
 

As a software architect/developer I feel vindicated in my belief that natural processes and structures provide us with many models from which to choose. We are better off making use of such models than becoming enmeshed in convoluted algorithms — what I have come to call "unnatural designs".


This is nothing new. Some of the greatest breakthroughs in software and hardware have unabashedly borrowed from nature: Object-Oriented Software Design, the Web, LDAP, LCD technology, computer chips, etc.


This should be one of these no-duh concepts that every developer grasps, yet I've had to spend way too much time just selling the concept that a natural process, progression, structure or taxonomy is the best model for solving a software problem.


This might be the result of developers being part of a generation that grew up paying lip service to nature, but not spending time in touch with it or learning about it. Seriously, how many developers appreciate the efficiency and ingenuity to be gleaned from basic natural processes like photosynthesis, the water cycle, or genetics?


I am not proposing that software professionals become biologists or physicists — OK, perhaps a physics degree wouldn't hurt. I am simply positing that we need to get out of our shells and expose ourselves to the outside world, especially that of nature, so that we can draw inspiration from designs that have been working reliably for eons. That's what "learning from the best" is all about.


Thursday, September 18, 2008

Solution Files

If you have used any version of Microsoft's Visual Studio IDE, you will recognize this post's title as the file type that identifies a project's top-level descriptor. In that you would be right. You may also assume this post is about Microsoft development products or platforms; in that, you would be wrong.

The reason I chose the above headline is that I feel it captures the essence of our craft. For the most part, what we as developers deliver to the customer are files. But they represent software that provides solutions. The operative word, of course, is solution(s). The medium of delivery is irrelevant, as it might be a hosted service, a memory instance, files on a hard drive or shrink-wrapped media — yes, some people still ship those.

The point of all this is not only to remind ourselves of which side our bread is buttered on, but of what our definition of "done" or "complete" should be. This might seem redundant until one notices the large number of applications that are marginally useful to their intended users but contain all sorts of buzz-worthy technologies and widgets.

A notorious example of this was a piece of in-house time management software my team was fated to use for way too long. It had all sorts of fancy popups, submenus and taxonomy representations, yet it was unintuitive, difficult to navigate and sacrificed performance on the altar of excessive widgetry. It was application hell.

This, I am afraid, is what happens when we lose sight of our core job description: providing productivity-boosting solutions to user problems. In the same way in which we expect our tools to allow us to do more in less time, our users also want some gain in exchange for the time they invest learning to use our software. I think that's a fair expectation, don't you?