Friday, October 30, 2009

Artificial Stupidity

Artificial Stupidity (AS - pronounced rudely as you would expect) is an important discipline and I believe a growth area in computer science.

Look at the things we can do with AS:

  • Dumb monsters in games
  • Automating the news reporting at Fox News
  • Anticipating typos or mistakes
  • Turning off computers in the cockpits of airplanes that are overshooting airports
  • Politicians
  • Testing of artificial intelligence
  • Thesis committees for AI doctoral candidates
  • Better ways to annoy MS Word users with silly assumptions about spelling or formatting automation

Actually there are endless applications for AS. You could run a Turing Test where, if the human agreed with the AS, we could confirm the human is a politician, mid-level manager, or works for Fox News.

There are many places to learn about AS. For example, NASA. Not the space flight NASA, but the National Artificial Stupidity Association at the University of New Mexico.


Unfortunately this is not something you can get a degree in. You have to specialize, maybe write a book, or make it part of your doctoral thesis (writing one book is the equivalent of 300 blogs or a doctoral degree).

The funny thing is that there are already a lot of AS professionals (or is that AS pirates?). There are a lot of smart folks out there, but some are just plain bad. So, write bad AI and you automatically become an expert in AS.

AS is everywhere! We have been awaiting self-aware machine intelligence, but I think we see AS emerging right now all around us. Look at MS Word and its auto-formatting for a wonderful example. Even software designed with traditional techniques and no AI or expert system code qualifies: airline sites that pick only pricey tickets, shopping carts that forget what you bought. Most applications have become 'self stupid'.

Monday, October 19, 2009

JRuby Trick: Saving time instrumenting Java classes

Ever wish you could teach a class a new trick without wrapping it or extending its capabilities in a new class?

One of the things I initially struggled with was extending Java to do cool things with Ruby.
First the Java class:

public class BoringPig {
    public String getName() {
        return "name";
    }
}

Being a Java guy, my first attempt at adding Ruby to Java looked like this:

class FlyingPig
  def initialize(boringJavaObject)
    @boringJavaObject = boringJavaObject
  end

  def flyPiggyFly
    # ..... exciting stuff
    puts @boringJavaObject.getName + " is flying!!!!"
  end
end

But this meant I had to create a new Ruby instance. If I have a lot of objects, I need to visit each one and create my Ruby version.

Instead, here is a cheaper way that simply adds the new method to the existing class:

class BoringPig
  # adds this method to any existing BoringPig objects or new instances of BoringPig
  def flyPiggyFly
    # ..... exciting stuff
    puts getName + " is flying!!!!"
  end
end

Note that I don't need to fiddle with the initialize anymore. I also don't need a pointer to the object because the instance is the same. The "getName" could have been written as "self.getName", but self is implied.

To summarize, I have made a pig fly without creating a flying pig class.

Essentially I am extending the class rather than wrapping. Because Ruby is much more dynamic about when you add a method to a class definition, I can do this at any time for any reason. This could also mean that I could extend the class depending on the context and what I need to do. This is sort of like adding a visitor method directly to a class definition.

Why else would I do this? Basically I have a ton of Java in a very wordy API. I would like to reduce the churn of code needed to do things with these classes. For example, I have a class with a very deep relationship that changes the class's use and context. Rather than create a wrapper for the new use and context, I can simply bubble this up to the methods of the root object. For you UML guys, I am adding stereotype info at the class rather than digging for it deep in the association of the stereotype tags.

Fun trick.

Pretty cool! This also meant my software runs a lot faster and takes a lot less memory.


Saturday, October 10, 2009

Like a Ruby in a Goat's A@@

I have been programming in Java forever. Of course, there is nothing wrong with that.

Now I am learning and writing Ruby and specifically JRuby. The reason for JRuby is that I am calling the API in MagicDraw to get some things done that I have always wanted to do, but didn't have time to set up NetBeans or 'gasp' Eclipse every few months (I am Chief Architect of MagicDraw, so always working on the latest version).

I am sure that I could be writing Java, but I just don't have the time to do the setup. That's the key. You just run the scripts. No compile, no install, no waiting. There is a lot of pull.

I am also strangely attracted to Ruby (not in that way, the platonic language loving way). A few years ago I saw a lecture by Dave Thomas (of Pragmatic Programmer fame, not the founder of Wendy's), and thought that Ruby seemed compact and a rather useful language. But alas, every attempt to learn Ruby was thwarted by the chaos which is inherent in most young languages... drifting syntax, few standard libraries, and poor support. This has changed with the introduction of JRuby and the inevitable trend that comes with the popularity of a language... sloth.

Now Ruby's pace as a syntax has slowed. Gone are the days that you can't test the latest article or book on your latest version of Ruby. Gone are the days of trying to match the right version of Ruby with some arcane GUI library. Gone are the constant changes to the language because when something is popular, change is discouraged.

There is still change, but it is additive. The GUI can now be based on Swing and I have that one in the bag. The implementation of JRuby runs on any Java JVM, so it works on any platform. Oh joy!

Things are not all good... Ruby is an interpreted language. There is no strict typing. Errors are at best cryptic for the new user. Learning Ruby is cool, but deciphering its errors is like a torture technique best left behind in the Bush administration: you'll never get a proper confession from a Ruby error message.

Well, with a presentation looming, I am forcing myself. I need to talk about PRR. PRR... well, it is about rules, as in expert systems, as in artificial (pickled) intelligence, and representing said rules in UML. All well and good, but what good are rules if you can't run them? So, in comes Ruby to extract the rules and run them in Drools.

Well, very quickly between myself and the great aid of Gerald Meazell, up comes working JRuby extracting UML and spitting Drools! But, then came trying to get Drools to work... I panicked... Then I looked for a rule engine written in Ruby and found Ruleby!

Ruleby is based on the great work of one of my buddies, Dr Charles L. Forgy of Carnegie Mellon University. Dr Forgy invented the Rete algorithm, which is simply the reason why expert systems work: it optimizes the execution of rules. I'll get into Rete someday, but take my word for it, Rete is cool.

Long story short, it is all working and I am ready for my presentation at the October Rules Fest in Dallas (except for writing the presentation... the easy part). I am using JRuby, Ruleby, and Tenjin for my template language. I can even generate a Drools DRL file!

Ruby is still treating me like a red headed stepchild, but I am learning, and Gerald, who is a bonafide and card carrying red headed stepchild, is helping a lot. More to come, even some code. Got great ideas and now I can find the time to script a few.




Sunday, September 13, 2009

The End of Dumb Software = The End of Dumb Thinking

Seth Godin is a marketing god. I say that because he very often exposes the obvious.

In Seth's latest blog, he talks about the End of Dumb Software. His point? Basically there is no reason for the stupidity of today's software. He is right. Sadly...

Seth's example is about the calendar app and mail on the Mac. It applies to any calendar or mail app I have ever used. There is a wealth of data, but the developers don't use it.

Many years ago I designed software that was not dumb. In fact it was what Seth is dreaming about. It understood who was important in your address book. It understood that 2am is not working hours and in fact would understand that you don't make appointments with friends and family during working hours unless it is for lunch or you were on vacation.

Where is this software? In a cardboard box in storage. Ericsson killed it. They couldn't see the utility. You might ask, how could they not see this as great? Well the VP of Ericsson I met was perfectly happy to silence the ringer on his cell phone by popping the battery off the back of the phone. In fact no Ericsson built phone had a way to silence the ringer and send a caller directly to voice mail.

That's why there is so much dumb software. Not that people are dumb, it is that they do not think. There is no analysis. It is epidemic. There should be billions of people like Benjamin Franklin, Albert Einstein, Robert Goddard, Leonardo da Vinci, and other great thinkers. The problem is that most people don't explore the world with their minds. They are mentally handicapped by an inability to add two ideas together to create greater ideas.

I'll say this again and again. There are no stupid people, just people that don't use their brains and maybe never had the skills to think. People are generally lazy. It is not a degradation of their work ethic; they never had a work ethic because most people do as little as possible. That includes thinking and learning. We learn the minimum. Curiosity stops as soon as we get the information we need. The ideas stop when we solve a problem partially. Or worse, we stop thinking when a problem crops up and we don't bother to solve the blocking problem.

Smart software requires smart developers. I'll go farther and say that smart software requires renaissance thinkers. Programming and design are just a couple of your skills. Your primary skills are learning, exploring, curiosity, and invention. Then you can create smart software.

Please, if you have a pulse, make a promise to yourself to be a renaissance thinker. Study everything. Mix ideas. Don't stop thinking at the happy path. Don't stop thinking when there is a problem. Think until the problem is solved.

Wednesday, September 2, 2009

Ending an Argument

I found this a difficult entry to classify. I have several blogs that all deal with psychology and how it affects our decisions and beliefs. I think that computer science certainly deserves this one.

Ending an argument with a thought-ending cliche sounds odd, but you have probably heard many of these if you have ever been in an argument with a software developer, manager, or customer. Simply put, it is a phrase that causes you to give up and not argue. It ends an argument abruptly and does not have a logical response.

Here are a few examples:
  • "That's a Good Thing"
  • "Just forget it."
  • "...or the terrorists win."
  • "Be a man and..."
  • "We all have to do things we don't like."
  • "You are not being a 'team player'."
As you can see, they are insidiously generic. They could apply to anything, and that's the point. By not being specific, they are therefore false arguments.

Here is how it is defined in Wikipedia:

A thought-terminating cliché is a commonly used phrase, sometimes passing as folk wisdom, used to quell cognitive dissonance. Though the phrase in and of itself may be valid in certain contexts, its application as a means of dismissing dissent or justifying fallacious logic is what makes it thought-terminating.

The thing to understand is that when you hear these phrases, it means your opponent is unwilling to hear your arguments or your logic. In effect your opponent is unwilling to change their position.

What can you do against this? Well, not much really. When this sort of phrase is tossed out, the opponent has shut down to any discourse. Odds are they will just start getting mad or shut down further.

You can try to continue. Go for the gold! The Olympic answer is that the phrase they just uttered does not apply to the specific argument. Challenge the cliche by having them utter the phrase after you state that the Easter Bunny is real or that teapots circle the Sun. Maybe it will work, but it is hard to get mental traction when someone has cognitive dissonance so strong they are unwilling to discuss a subject logically.

The best you can do is call foul (or fowl if arguing about chickens). Point to this blog and let them read it. Help them understand that they may not really have a reason to believe what they do, and that using a thought-ender is proof. Without evidence otherwise, you are winning, and it is a sorry win only because they are giving up by using such a cliche.

Will that work? Hard to tell. Some people are unwilling to acknowledge that they are wrong. This is very strong, as a past president has proven. The mind can invent many beliefs and even believe these cliches are logical and support those beliefs. The fact is, the brain is very afraid of being wrong and deathly afraid of the cost of new beliefs.

If you can be wrong once, can't you be wrong again? The brain rebels at giving an inch because it could lead to a nasty trend.

Why fear new beliefs? Simply, being wrong means you are not a good provider and are at the wrong end of the gene pool. If this is at work, even worse. One bad belief admitted might show even more poor thinking and thus a reason why that person should be fired. As you can see, losing an argument is like losing a fight with a lion: proof that in the battle of the survival of the fittest, they are not so fit.

When this happens in the workplace, you need to be careful. Losing is really bad for many people. They may already be fearing for their jobs, whether it is a justified belief or not. Leaders in companies also hate to have their authority questioned. Programmers too are very sensitive to being wrong.

At work, you might want to be careful and defuse the situation. People are afraid of being seen as less than they are. The key thing is that we are all human. Everyone makes mistakes. In many situations like this, I like to say that there may have been no other choice. Chalk it up to unavoidable and say that anyone would have had that belief.

Here is the list of cliches from Wikipedia.

Non-political examples

  1. "That's a Good Thing."
  2. "Why? Because I said so." (bare assertion fallacy—also “I’m the parent, that’s why” appeal to authority).
  3. "That’s a no-brainer."[3][4][5]
  4. "When you get to be my age..." (as in “When you get to be my age you’ll find that’s not true.”)
  5. "You don’t always get what you want."
  6. "What goes around comes around."
  7. "The best defense is a good offense."
  8. "Everyone is entitled to their own opinion." (appeal to ridicule if said sarcastically)
  9. "It works in theory, but not in practice." (base rate fallacy)
  10. "There’s no silver bullet."
  11. "Stupid is as stupid does."
  12. "Easy come, easy go."
  13. "Life is unfair."
  14. "Such is life."
  15. "It is what it is."
  16. "It was his time."
  17. "Whatever."
  18. "Yawn."
  19. "Be a man and..."
  20. "Think about it."
  21. "Just forget it."
  22. "...so, you do the math."
  23. "We will have to agree to disagree."
  24. "We all have to do things we don't like."
  25. "You are not being a 'team player'." (ignoratio elenchi).
  26. "That's just wrong." or "You just don't do that."
  27. "It takes all kinds to make a world."
  28. "Just do it."
  29. "That's a cliche."
  30. "That's what s/he said."
  31. "Don't be that guy."
  32. "Just look at me now."
  33. "Touché!"
  34. "Better to have it and not need it, than need it and not have it."
  35. "Because that is our policy."
  36. "Don't be silly."
  37. "There's no smoke without fire." (used to convince others that a person is guilty based on accusation or hearsay and to discourage further examination of evidence)
  38. "Your mom."
  39. "But...anyways...."
  40. "I'm just sayin'"
  41. "C'est la guerre"
  42. "Amen!"
  43. "So it goes."

Political examples

Thought-terminating clichés are sometimes used during political discourse to enhance appeal or to shut down debate. In this setting, their usage can usually be classified as a logical fallacy.

  1. "Racist." (Ad hominem attack).
  2. "That’s just a (liberal/conservative/libertarian/communitarian/etc.) argument." (association fallacy).
  3. "Socialism or Barbarism!" (false dichotomy)
  4. "'Anarchist organisations', isn't that an oxymoron?" (equivocation)
  5. "If you are not with us, you are against us." (or its opposite, "Who is not against us is with us")(false dichotomy)
  6. "Love it or leave it." (false dichotomy)
  7. "Support our troops." (ignoratio elenchi).
  8. "...or the terrorists win." (false dichotomy).
  9. "If you're not outraged, you're not paying attention." (false dichotomy)
  10. "Better Dead than Red!" or its inverse "Better Red than Dead!"
  11. "That's a conspiracy theory."
  12. "Freedom is not free." (Bare assertion fallacy)
  13. "Live free or die."
  14. "Fascist arguments need no comments." (weasel words)
  15. "If we gave it to you, we'd have to give it to everyone."
  16. "Freedom is non-negotiable."
  17. "Especially in this economy."

Religious examples

Thought-terminating clichés are also present in religious discourse in order to define a clear border between good and evil, holiness and sacrilege, and other polar opposites. These are especially present in religious literature.

  1. "God has a plan and a purpose."
  2. "The Lord giveth, and the Lord taketh away." Job 1:21
  3. "Adam and Eve, not Adam and Steve!" (opposing same-sex marriage)
  4. "God works in mysterious ways."
  5. "Trust in the Lord with all thine heart; and lean not unto thine own understanding. " Proverbs 3:5
  6. "Forgive and forget."
  7. "That's not Biblical."
  8. "Jesus loves you." (ignoratio elenchi)

The religious or semi-religious ideas of cults, heretics, and infidels are also often used as thought-terminating clichés, e.g. "Do not listen to him, he is an infidel," (a guilt by association fallacy) or "That line of thought sounds like a cult" (also a guilt by association fallacy).

Just for fun, in case you have read this far, here is a lesson on cults.

Sunday, August 30, 2009

Truth in Dilbert (like we're surprised) - Diversionary tactics

I am happy to say that the following Dilbert cartoon is legitimately linked to the Dilbert.com site. I just noticed today that they allow embedding of the strip. Why am I happy? Because there are wonderful lessons to be had.

Today's Dilbert is something I see all the time. The joke here is management, but I see it from developers too. Basically imagining the worst possible outcome. Of course there is the opposite, imagining the best possible outcome.

The problem is critical thinking. Most people can't do it anymore. You need to critically examine the statements made to decide whether they are bullshit or not. Of course it helps if the person making such statements isn't a complete rube and doesn't have some other evil (or accidentally evil) intent.

Why would people say silly things that are either massively pessimistic FUD or mind-blowingly optimistic? Sadly it isn't because they are clueless; these are just diversionary tactics. It is not that something is impossible, they just don't want to do it. It isn't that adopting a new language or technology is easy and super productive, but it is cool and would look good on a resume.

Remember the old saw: Don't assume malice when simple stupidity will do. Well, here is another one: If they really aren't that stupid, there is another reason why they 'sound' stupid. Sadly this 'sounding stupid' is pretty deep. To say something stupid, you have to commit to believing the statement. That means endless argument as facts will be ignored.

The human brain is a messy place.

On to the cartoon!

Dilbert.com

Friday, August 28, 2009

Check Your Blinders at the Door

Here are some bad words:

  • Tunnel Vision
  • Blinders
  • Arrogance
  • Single Mindedness
  • Happy Path
  • My Opinion
  • One Way
  • One Solution
  • One Method
  • Only

These are very bad words because they all lead to software failure. Simply they lead the developer into false beliefs that the code is complete and meets user needs.

  • Think like a user, not how you would use it!
  • Happy, Unhappy, Alternate Paths
  • What could go wrong?
  • Can the user make a mistake?
  • What about Undo/Redo? Is the action symmetric?

Remember that the most critical bug is that the user will not use your software. If you cause extra work for the user, inconvenience them, waste their time, or treat them as stupid drones, you will not be considered a great programmer.


Sunday, August 2, 2009

Error Dialog? Think twice!

A lot of developers pop up dialog boxes to report errors. But why? The fact is that there are three types of error:

1) An error by the user that needs to be corrected.
2) An error unexpected by the developer and announced for debug purposes
3) An error that occurred but was corrected. Often reported as part of debugging the capability, and left in the code.

Warnings and errors are usually the same thing; a warning is just a lower severity of error. So assume we are talking about both.

The problem with most software is that the errors unpredicted by developers are often reported without any information to help isolate the error so that it can be corrected in code. We have all seen it: a confusing error message and usually diminishing functionality, because the error makes the application unstable, corrupts data, and makes the application behave like it has gone insane.

The worst thing about these unexpected errors is that if you call support, the first thing they ask is, "What were you doing just before you saw the message?" What a worthless question. Most people are not memorizing every step prior to an error. It is one of the world's silliest questions.

I am sorry, but you need debug info. You can't interview customers. You need to build in the ability to report errors in a rich way to allow recreation and/or isolation of the issue.

Data to Report for Errors

What information? How about the log, the user's files, and, here is a breath of fresh air, the undo/redo stack. The undo/redo stack is in many applications already. The only job is to serialize the data to allow a developer to understand what the user did 'before' an error.

The commands in the stack may not isolate the exact issue, but they can narrow down the activities. It should be required by all development organizations that the undo/redo stack be serializable and even replayable to recreate errors.
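
As a minimal sketch of what that could look like (the class and interface names here are hypothetical, not from any particular framework), a command stack only needs its entries to be serializable to double as a replayable error report:

import java.io.Serializable;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: each user action is recorded as a serializable command,
// so the whole stack can be attached to an error report and replayed later.
interface ReplayableCommand extends Serializable {
    void execute();
    void undo();
}

class CommandHistory implements Serializable {
    private final Deque<ReplayableCommand> done = new ArrayDeque<ReplayableCommand>();

    void perform(ReplayableCommand command) {
        command.execute();
        done.push(command);
    }

    // Snapshot of everything the user did 'before' the error, oldest action first.
    List<ReplayableCommand> snapshotForErrorReport() {
        List<ReplayableCommand> snapshot = new ArrayList<ReplayableCommand>(done);
        Collections.reverse(snapshot);
        return snapshot;
    }
}

Attach the serialized snapshot to the error report and a developer can step through exactly what led up to the failure.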

Rules for Error Dialogs

1) All errors (and warnings), should have a way for the user to report to developers. That includes errors made by developers. The fact is that even user errors can be a result of bad design. If you get a disproportionate reporting of user errors, your application probably needs a redesign.

2) No generic message dialogs! They just ask for trouble. You should have:
a) A reference to the part of the code the error was generated from.
b) A short sentence to describe the error
c) Any related data, like the type and object instance identification where the error occurred.
d) Information on how to correct the error
e) If possible, undo the action that caused the error
f) Ability to report the error to the development team

3) Don't report status with dialogs if you can avoid it. The fact is that a dialog costs the user at least an extra click just to dismiss it.

4) Reporting an error that you have corrected is also a waste. Often such dialogs remain in code because the developer likes to see the result of hard work. Users don't care, so these dialogs must be forbidden. Even logging this sort of thing may be a waste of resources.

5) If you can, auto report errors back to developers. The simple fact is that developers will successfully ignore user problems unless you have the statistics to prove code is broken. Usually only users see errors as developers don't behave the same way and don't encounter such errors. It is therefore imperative that such data be reported to developers to close the loop. Do it electronically to avoid wasting everyone's time, money, and effort.
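
Here is a minimal sketch of rule 2 in code. The names are hypothetical (not from any real framework); the point is simply that an error report is a structured object, not a bare string:

// Hypothetical sketch of a structured error report: it carries where, what,
// and how to fix, instead of a generic "An error occurred" message.
class ErrorReport {
    final String sourceLocation;   // a) reference to the code that raised it
    final String summary;          // b) short sentence describing the error
    final String relatedData;      // c) type and instance id of the object involved
    final String correctiveAction; // d) how the user can correct the error

    ErrorReport(String sourceLocation, String summary,
                String relatedData, String correctiveAction) {
        this.sourceLocation = sourceLocation;
        this.summary = summary;
        this.relatedData = relatedData;
        this.correctiveAction = correctiveAction;
    }

    // f) one-click reporting back to the development team; the transport
    // (mail, HTTP, issue tracker) is deliberately left open in this sketch.
    void sendToDevelopers(ErrorReportSender sender) {
        sender.send(this);
    }
}

interface ErrorReportSender {
    void send(ErrorReport report);
}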

All Dialogs are Criminals Until Proven 100% Necessary

The simple rule of thumb is to avoid dialogs in many cases. This is especially true for errors, warnings, and informational dialogs.





Thursday, July 30, 2009

Build software from straws

Think about a building architect and a builder. The customer wants a ten story building. The architect designs the building on paper with a pen. The drawing, to the builder, looks like a lot of lines. You could use string, straws, or toothpicks to build the building.

Sadly, this is how many developers write software. They should use steel-like programming, but they substitute fragile code because it does the job.

Building a building out of straws will do the job, but it fails eventually.

Tuesday, July 28, 2009

UML in the Post XP World

I was catching up on UML this week. I'm a bit amazed by what I read. Most of it sounded short sighted and outright fear-mongering. A lot of FUD too. The good news is that there is less FUD, and more people are seeing Agile as it is actually practiced (because 10 to 1 your environment won't allow it by the book).

Let's start with what I found in a Slashdot thread:

The way I use UML is as way to select projects I want to participate in. If it uses UML, I'm out. The correlation of using UML with rigid authoritarian organization and fighting with "productivity enhancers" rather than developing software is too high.

Yikes! What this tells me is that the guy has a fear of tools based on the failure of other tools. I've used crappy tools myself, but I don't let that paralyze me with fear. Funny he uses the word 'correlation' as if he actually has definitive data. If only the world was better at critical thinking.

Next was the response from one fellow in the thread that said this:

...I won't hire any developer who refuses to use UML since I'll assume that s/he is lacking in essential software engineering skills and is a "code first, understand the problem later" sort of person.

You can respect a developer for what tools and techniques they will or will not use. Some folks are hackers, and that's not necessarily compatible with employment with those of us who like to think and plan before doing.

But what caused one developer to hate UML and the other to love it? Do a quick search on "UML sucks" on Google and you will see a lot of arguments for and against. My thought is that this is all about a combination of silver bullets and the pressure to succeed at work.

The silver bullet is of course the idea that UML contributes to better designs and, we would hope, reduces the development cycle. UML, by capturing a design's state, should help with both long term support and reuse too. It all sounds so good!

The truth is that UML can do all these things, but you have to work for it. Nothing is free. Back to that original comment. The author probably was less disappointed in the tools than the fact that there is no free lunch. Tools can only help, they cannot eliminate thinking through hard problems.

Silver bullet or not, you have to have a good gun to shoot it with and an excellent marksman. You also can't kill an elephant with a small caliber bullet.

UML does not create designs on its own, so you need experienced designers. Not all tools are equal to the task; for example, Visio can't help you generate code, and some high end UML tools are frankly hard to use or let you do stupid things. More importantly, there is no one diagram that will represent a complex design; you have to create multiple diagrams at several layers of abstraction and in several modes (requirements, static structure, time/ordering, etc.).

Another issue that is raising its ugly head is the movement to UML 2.0 and beyond. I've been to meetings of the OMG to see the folks there work on the standard. It is as much about the base spec as it is about creating profiles that solve specific problems in science and engineering. The trend seems to have the constant undercurrent of CASE (Computer Aided Software Engineering) and its new buzzword MDA (Model Driven Architecture).

MDA has a lot of base assumptions and some amazing leaps of faith. First though, let me say that MDA can and does work for very specific applications. On the other hand, MDA is not something the average developer will use and should stick to the world we know of design and code. MDA is modeling in its purest form and by definition, models are precariously difficult to represent in the real world. The successful MDA tools work with models that are easily transformed.

Why UML?

Why not? Why English? English is not perfect, but we seem to muddle by. The key is that UML has most of what we need to describe the important bits.

What about MDA, creating complex diagrams?

Well, here is a rabbit hole that a lot of folks go down, Model Driven Architecture (MDA) or not. It is as old as modeling itself. The issue often is that designers try to model 'all' of their design. The problem is that complexity does not always aid understanding. The ultimate MDA is to model to the point that you produce 'all' of your code.

'All' of your code? From diagram to code at a push of a button? MDA is not that mature - unless you listen to an MDA vendor :o) However, for a certain subset of your code this is perfectly logical. There are some things that are mundane enough to generate. The reality though is that in many cases, code is more appropriate than the complexity of diagrams required for MDA. It is also true that UML is not really a good medium for certain details that do read better in code than diagrams. The key however is to mix the two.

So what do you model? The key is that you model the coding activities. Primarily we want to capture use cases, sometimes activity diagrams, and certainly class diagrams. We want to do a lot of the static structure in UML and the more dynamic pieces in code. We also want to automate the POJO creation and properly document interactions. For complex activities, we want to create multiple views. For complex systems, we should model the dynamic nature of behavior in activity, state, sequence, and other diagrams.

But this is more than design leading to code. That would be fine, but it is about how you get to code and ensuring it is good code. Visual diagramming can really add value to your process simply because it is easier to see certain aspects. I have reverse engineered a lot of code into UML, and you can imagine how many times I saw issues invisible to on-the-ground coders or the Agile leads drinking coffee by the burn barrel.

Here is one that puzzles me from another thread:

For me it's better to draw a class (sequence) diagram on a sheet of paper and (burn it :) explain the rest in conversations.

Burn? OK, so the guy likes class and sequence diagrams, but burns them when he is done? Wow! Imagine you are designing a car or a building; do you think those folks burn the design? Crazy, right? So why would this guy burn good hard work? Again, this all comes down to a developer not understanding the long term value of the work he has done and not understanding the tools.

Is this to hide bad design? Job security? Some other deep psychological problem? Some kind of narcissism to give the illusion of control? I have heard this statement many times, so it is not just one person. Luckily it is a trend that is fading of late, because while it is fine for the developers, it seems to lose traction uphill with the businesses. That's good news. More people are buying tools and actually buying the training to use them.

The case for MDA is reducing the resistance to tools. MDA is not a silver bullet, but it does help. The second area is tracing the business requirements to execution. Quite simply, businesses are more comfortable with what they can see. It also helps that the process makes it easier for the developers at the end of the line to use tools to feed back up the chain.

Please understand that Agile is good in many respects. However it breeds another type of Agile that is sloppy, inaccurate, and usually requires rewriting of code (luckily the designs were already disposed of or never existed). It is one thing to have one week iterations and quite another to do so based on a structured plan with forward looking designs and tracking to your requirements. Perhaps we need a word for bad Agile? Cowboy-Agility?

Cowboy-Agility - The reduction or elimination of all possible development process to reduce the burden on the programmer to think no farther than writing the next line of code.

According to the Agile philosophy, one is supposed to stop parts of the process if they are not working. The problem is that in most organizations there is a blind spot that causes many processes to fail even if they are good. Agile didn't start with burning designs, the cowboys trying to ride roughshod to succeed did that. The problem is that success in the short term (or rather code for code's sake) is not good in the long term. Even saying we will 'refactor' the hacking later, misses the point.

Software design is still hard work. There are no silver bullets, but there are bullet molds and guns that follow the standards for those bullets.

Immutable Objects

Immutable objects are objects that, once created, stay in the same state. Immutable objects improve thread safety, aid security, and avoid unauthorized changes to state. There are many immutable types in Java: String, Integer, Float, etc.

Thread safety is assured for such objects because only one thread can create the object, and the state of the object is static and unchangeable until the object is dereferenced (i.e. ready for garbage collection).

Please note that some make the mistake of believing something is immutable when it holds a reference to a changeable object. For example, the following class is not thread safe because the creator of Foo, or any thread that calls getList(), can add or remove contents of the list. Note also that the synchronized method is both useless and inappropriately reduces liveness.


// not thread safe: both the creator and callers of getList() can mutate the list
public class Foo {
    private ArrayList list;

    public Foo(ArrayList list) {
        this.list = list;
    }

    public synchronized ArrayList getList() {
        return list;
    }
}
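
For contrast, here is a minimal sketch (using a List of Strings purely for illustration) of how the class could be made genuinely immutable: take a defensive copy on the way in and hand out an unmodifiable view on the way out.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// A genuinely immutable variant: neither the creator nor callers of getList()
// can change the internal list after construction.
public final class ImmutableFoo {
    private final List<String> list;

    public ImmutableFoo(List<String> list) {
        // defensive copy: later changes to the caller's list cannot leak in
        this.list = new ArrayList<String>(list);
    }

    public List<String> getList() {
        // unmodifiable view: callers cannot add or remove elements
        return Collections.unmodifiableList(list);
    }

    // no synchronized needed: the state never changes after the constructor returns
}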

Friday, July 24, 2009

Garbage in and Rotting

Remember garbage in, garbage out? With software, it is sometimes memory in and rot. Although memory leaks are prevalent, there is another class of memory problems that will cause us to consume a lot of the heap space and overwhelm the garbage collector mechanism.


One truth of computer science is that innovation usually causes new problems. With a Garbage Collector (GC), we save a lot of time by not releasing memory by hand. But there are new issues which can cause a different type of memory leak: basically, long lived temporary variables, or large numbers of temporary variables, that churn the GC or hold memory much longer than is required.


We generally understand and can spot big GC memory leaks, but the temporary ones and those that overwhelm the GC are harder to find. There is also a class of GC problem that can be almost as bad as a memory leak: Long-life temporary objects (LLTO).

An LLTO is something created but not GC'd for a very long time. The issue usually arises in loops or in recursive or deeply nested method calls. Because an object does not get dereferenced until it goes out of context or is explicitly nulled, it is given a longer life than is necessary. The problem gets worse as nested data is used deep in calls, or in long nested loops, where the virtual machine and compiler really can't figure out exactly when objects are ready for GC.

Loops can be long lived operations. As developers we don't think about memory management that often because Java does a good job; however, the time before memory sees GC can cause a problem. If a value is not explicitly set to null, the program cannot assume that the memory is ready for GC.


Our first example seems OK, but it has quite a few problems.



void main(Blob[] table) {
    Blob xyz = null;
    for (int i = 0; i < table.length; i++) {
        xyz = table[i];
        // ... do something with xyz
        // ... do something else
    }
}

In our rewrite, we simply null the content of the array. This allows the object to be ready for GC as soon as we complete the operation. This also ensures that xyz is ready for GC before we continue to do other things in the loop.

void main(Blob[] table) {
    Blob xyz = null;
    for (int i = 0; i < table.length; i++) {
        xyz = table[i];
        table[i] = null;
        // ... do something with xyz
        xyz = null;
        // ... do something else
    }
}


Here is another example of poor GC-able code:



void main() {
    BigZ z = new BigZ(true);
    // ... things to do with z
    boolean status = z.getStatus();
    space.foo(z); // long operation
    // z may now be GC'd
}

class Space {
    void foo(BigZ z) {
        if (z.getStatus() == true) {
            // ... do long operation
        } else {
            // ... do a different long operation
        }
    }
}

The problem is that the BigZ data cannot be garbage collected until the foo method completes. In addition, if foo() were badly written and the reference to z were put into a longer lived container, we could easily have a memory leak.



void main() {
    BigZ z = new BigZ(true);
    // ... things to do with z
    boolean status = z.getStatus();
    z = null; // z is now ready for GC
    space.foo(status); // long operation
}

class Space {
    void foo(boolean status) {
        if (status == true) {
            // ... do long operation
        } else {
            // ... do a different long operation
        }
    }
}

The result of the new code is that z is eligible for GC before the call, rather than some time after the operation. You might be tempted to null the parameter inside foo() instead, but the caller's reference would still keep z alive until the call returns.
This approach cannot apply to all cases, but it can apply to many. The hard part is examining the usage of the contents of complex objects, especially when dealing with state that may not be related to a need to keep the object in memory for another task.


Best practices are:
1) Set objects to null when your context no longer needs the value.
2) Set array indexes to null when done with them.
3) If possible, use prunable structures (trees) to further reduce memory as objects are no longer required.
4) Reuse of variables is not enough; null them as soon as their values are no longer needed, as loops and calls could delay the GC until they are reassigned.
5) Avoid allocating large hunks of memory; it is better to lock and iterate rather than clone to avoid corruption. Balance memory use against multi-tasking.
6) Avoid creating containers just for the sake of making loops easier. If you have a complex structure, create a specific thread safe iterator (see the sketch below).
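
As a minimal sketch of point 6 (the BlobRegistry and visitor names are made up for illustration), a complex structure can offer a locked traversal of its own elements so callers never build a temporary container just to loop:

import java.util.ArrayList;
import java.util.List;

class Blob { /* placeholder element type, as in the loop examples above */ }

// Hypothetical sketch: the structure walks its own elements under its lock,
// so callers do not clone it or build throwaway containers to iterate.
class BlobRegistry {
    private final List<Blob> blobs = new ArrayList<Blob>();

    synchronized void add(Blob blob) {
        blobs.add(blob);
    }

    interface BlobVisitor {
        void visit(Blob blob);
    }

    // Lock and iterate in place rather than handing out a copy of the list.
    synchronized void forEach(BlobVisitor visitor) {
        for (Blob blob : blobs) {
            visitor.visit(blob);
        }
    }
}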



Things to think about:

1) GC can be quicker if memory is not released in huge chunks
2) The sooner memory becomes a candidate, the sooner it can be scavenged in GC.
3) If there is too much memory to recover in one cycle, GC may cause heap to grow.
4) Reducing memory consumption with temporary objects will reduce heap growth.

Wednesday, July 1, 2009

Anthropomorphic Software

Software is seen as living, and thus responsible for its own bad behavior.

People talk to their dogs. Dogs know a few commands, but the average 2 year old is way smarter.

People talk to their cats... Enough said about that!


But programmers really talk a lot with their software.

"There is something wrong with the code." That is talking about code like a living thing.

What you should say is, "There is something wrong with the code I wrote."

This isn't just semantics or being a language nazi. Think about how you feel when you say each sentence. Think about good programmers and bad programmers. What type of sentence structure did they use?

The more you think of software as 'alive', the less you will look to the real issue: The programmer. The programmer is alive. The programmer is a human and full of faults, assumptions, and general sloppiness.

Don't sit there studying the manic depressive software bugs. Study people because that's why software looks manic depressive.

Java Exceptions and Cognitive Dissonance

Cognitive dissonance is the uncomfortable feeling you get when something you hold as a deep belief collides with something that flies in the face of that belief. It is a coping mechanism of the brain that you might imagine is meant to help us not believe in Santa, the Tooth Faerie, and the End of the World. At some point you see evidence, your fantasies go 'pop', and you are back to reality.

Sadly, belief is stronger than cognitive dissonance. It is why cults still exist. It is why deprogrammers are still in demand. It is why there is so much pseudoscience in the world.

As an example, many end of the world cults are so messed up when the world does not actually end, they go a little nutty. They just can't believe that they were wrong. Many just create a new fantasy and set a new date for the end of the world. Sure, a few wake up and leave the cult, but most hang on.

Programmers have the same problem. Take exceptions. Please, take them! They are great... except when caught and ignored. 99% of crappy code is caused by developers not properly catching and 'handling' errors. They see something like exception handling and assume it is somehow rare, or that because the catch does not force you to write code, there is no need to write the code.
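
Here is a minimal Java sketch of the difference (the port-parsing scenario is made up for illustration): the first method is the fantasy, swallowing the exception; the second actually handles it by logging, falling back, and leaving a trail.

import java.util.logging.Level;
import java.util.logging.Logger;

class PortConfig {
    private static final Logger LOG = Logger.getLogger(PortConfig.class.getName());

    // The fantasy: catch and ignore, then wonder later why the app acts insane.
    int parsePortAndIgnore(String text) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            // swallowed: no log, no recovery, no report
        }
        return 0; // silently wrong whenever parsing failed
    }

    // The 'handle' version: log it, fall back to something sane, tell someone.
    int parsePortAndHandle(String text, int defaultPort) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            LOG.log(Level.WARNING, "Bad port value '" + text + "', using default " + defaultPort, e);
            return defaultPort;
        }
    }
}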

The cognitive dissonance happens when the application becomes unstable and/or crashes. They may even log the exceptions, but for some reason, they can't understand why the code fails. They justify their decisions with fantasies like the exceptions being rare or impossible or better yet, un-handleable. They feel uncomfortable, but they just can't seem to come to terms with the idea that they failed to add error handling.

Like an end of the world cult, they just set another date when their application will work and wait for it to happen. Sadly the day never comes. They still believe in a fantasy, despite the evidence in front of their face.

What if we changed the keyword from 'catch' to 'handle'? Would the world be a better place? Maybe that is the first thing you should say at a code review? Think of it as the first tool in your programmer deprogrammer toolkit!

Saturday, April 18, 2009

Logical fallacies and software

Logical fallacies are simply arguments that sound logical but are not. The books you can find on the subject are usually aimed at winning arguments, or rather at avoiding losing an argument to a purveyor of false arguments.

Software usually fails from logical fallacies... I can tell by your expression, you are intrigued, but not convinced. Poor logic kills software, but why say logical fallacy as if it is different?

Let's take a simple one. Look at two statements.

  1. If I break an egg with a hammer, the egg was broken by a hammer.
  2. If the egg is broken, I broke it with a hammer.

Version 1 is a valid argument. Version 2 is not. Version 2 is actually influenced by version 1. We assume hammers can break eggs, so all broken eggs were broken by hammers. The fallacy is assuming that prior evidence leads to statements of truth in the future. In fact, it is only by accident that 1 leads to the assumption in 2. The second egg could be broken in various ways, from simply dropping it, to an earthquake, a hatching chicken, being run over by a steam roller, or being hit by an asteroid from space.

Version 2 is only likely to be true if the egg lives in a hammer factory. But as you can see, just because you are in a hammer factory, not all eggs are nails.

Look at a real world example:

  1. If I try to break into a computer and fail, I will fail to enter the password three times.
  2. If the user fails their password three times, they are a hacker.
Again, the first statement leads to the false assumption in the second statement. Just because a hacker has to guess a password does not mean every failure is a hacker: a real user might goof three times, but we assume they are a hacker.

The danger here is assuming that messing up a password is the action of a hacker. Is it? Odds are, no. Most people forget passwords, mistype passwords, or even use the password for another application by mistake. It is easy to test too.

Failure to enter a password is not proof that the user is a hacker. But many developers treat these poor people as villains. I'm fairly sure you have been a victim. Have you ever had to go through hoops to reset an account that was locked down because you goofed your password?
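
As a minimal sketch of the alternative assumption (all names and thresholds here are hypothetical), failed logins can be tracked and examined for an actual attack pattern instead of instantly branding the user a hacker:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: distinguish a forgetful user from a likely attack.
// A human who goofs gets another try or a password reset hint; only a rapid
// burst of failures from one source is escalated for review.
class LoginAttemptTracker {
    private static final int HUMAN_GOOF_LIMIT = 3;      // an honest user mistyping
    private static final int SUSPICIOUS_LIMIT = 20;     // far beyond honest mistakes
    private static final long BURST_WINDOW_MS = 60000L; // failures packed into a minute

    private final Map<String, Integer> failureCounts = new HashMap<String, Integer>();
    private final Map<String, Long> firstFailureAt = new HashMap<String, Long>();

    enum Action { ALLOW_RETRY, SUGGEST_RESET, FLAG_FOR_REVIEW }

    Action recordFailure(String source, long nowMs) {
        Integer previous = failureCounts.get(source);
        int count = (previous == null ? 0 : previous.intValue()) + 1;
        failureCounts.put(source, Integer.valueOf(count));
        if (!firstFailureAt.containsKey(source)) {
            firstFailureAt.put(source, Long.valueOf(nowMs));
        }
        boolean burst = (nowMs - firstFailureAt.get(source).longValue()) < BURST_WINDOW_MS;

        if (count >= SUSPICIOUS_LIMIT && burst) {
            return Action.FLAG_FOR_REVIEW; // looks automated, escalate
        }
        if (count >= HUMAN_GOOF_LIMIT) {
            return Action.SUGGEST_RESET;   // probably a forgotten password
        }
        return Action.ALLOW_RETRY;         // an honest typo, just try again
    }
}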

This is just one fallacy. There are quite a few that lead to issues with poor application logic and assumptions. Read up on logical fallacies. Be a better developer by knowing the difference between good logic and a fallacy.

Here are a few good books:

Friday, April 17, 2009

Use Your Brain

Your job is to use your brain. Not just to think. You need to learn, imagine, be creative, and think.

Put yourself in the user's shoes. Don't think of what you want, but what the user needs in specific situations. Use your imagination. Study the subject. What works? What does not? Think through usage scenarios to test your assumptions. What can we do that is new? If I have a set of ingredients, how can I mix them in new ways?

Today I talked with a high ranking officer in the US Air Force Reserve. He has been on the job for 25 years. He praised me for understanding his problems and was amazed at my creative and on-target insights and solutions. I have many experiences like this. Why? Have I been in his shoes? Have I flown a plane? Have I been in battle? No, but when I have a customer, I study their domain.

I put myself in the customer's mind and imagine being in their shoes.

The true definition of an architect, programmer, or an analyst is a knowledge worker, not a robot. Knowledge and imagination are your tools. You learn the domain, talk to people in the domain, and imagine you are the user in the domain. Think of problems, their solutions, and test the solutions on paper, then implement those solutions. 

If you blindly follow a spec, you get a blind implementation. The reality is that specifications, requirements and even designs are important, but only if you use your brain on the way to the implementation. 

User requirements are from the user's point of view, not how you need to implement them. They are not everything you need to think about to create a successful solution. Specs and requirements are worthless pieces of paper by themselves; they hold very little value on their own. Add your brain to make them real.

Do these things and you will be successful, well paid, and you will enjoy your job. 


Tuesday, March 24, 2009

Passion of the Programmer

Passion is an odd thing. You either have it or you don't.

Take this quiz:

  1. Do you love to write software?
  2. Do you love to learn new domains and professions as part of developing software?
  3. Are you passionate about learning new techniques to improve your success at development?
  4. Do problems drive you crazy until you solve them?

If you answered yes to all of these, you are a great programmer. If not, think about changing professions. Don't become a manager, please!

All great programmers are dysfunctional

Yep, you heard me. There is no such thing as a programmer that is a normal person.

This is a very simple statement that is easily verified. If you do find an excellent programmer that seems to be otherwise normal, he could be a serial killer. Somewhere there is a flaw. Usually something that gets in the way of social IQ.

Not all programmers are totally incompetent at social interactions. I also don't mean to say that you lock them in a room with a slot for pizza. Just don't expect normalcy.

What if you do find a programmer that is socially outgoing, has their act together, calm, does not procrastinate, easily understood, a great communicator, blessed writer, and a wonderful speaker? It's really simple, this person will become a manager. This is true even if they love programming. 

Are programmers crazy? Isn't everyone? We all have different bits of crazy; it is just a matter of how much and where. Really it is about the ability to process programming languages, a certain type of imagination, a willingness to torture your body by sitting in front of a computer, and a few other things.

But why do programmers seem to be so far off the beaten path? I think this is a side effect of creation mixed with making a living. We love our creations like a jealous parent. Some are at another extreme and act like jealous, vengeful gods, protecting their creations from all others.

There is more to this defense of their work than just how we evolved to become programmers. Some of this is just sticky biology. Brains don't like doing something twice. Heck, brains don't do something once if they don't need to. Our brains assume that we are right all the time, not because we are, but because changing our minds takes effort. With software this also means rewriting.

If you are employed, it gets worse. Rewriting is a black mark against your skills. Rewriting causes deadlines to be missed too. Performance and efficiency are why we are paid. They are how we feed ourselves, clothe ourselves, and pay for shelter. Complain about another guy's code and you are threatening their livelihood.

Open Source is a bit different. But not too much. It is the same game, just different repercussions. Most act as if their jobs depend on their code being right just like a real job.

Well, that's enough for now. Back to learning JRuby. 

Wednesday, March 18, 2009

Houdini and Software

Houdini was an escape artist and a magician. How does that relate to software? Quite simply it is all about solving impossible problems or making hard problems seem magically simple.

Magic is about hiding a trick so that the audience sees a grander illusion. Escape is about overcoming the physically impossible. In software we need the same techniques.

Without the magic, we have unusable tools full of complexity. Often there is too much data. Without the escape artist attitude, we leave problems unsolved or force users to repeatedly solve their common issues.

Magic and escape are related in many ways, but the escapes are not just tricks. That's a fact that many miss about Houdini's genius. There are those that do tricks to fake escapes. Nothing wrong with that, but there are many escape artists who, like Houdini, find real flaws in the shackles and devices they need to escape.

Houdini didn't just pick locks. He understood locks. He knew what it would take to get around their purpose. He knew the anti-pattern of locks. Once you really understand locks, most are breakable as most problems are.

Look at safes, where being locked in one is easier to escape than trying to break into one. Houdini understood that there were no countermeasures inside a safe that couldn't be bypassed with a simple screwdriver. It only seems impossible if we think of the safe only with a single mind.

For magic, sometimes the creation of the illusion is hard. Sometimes it is simple. But the key fact is, there is some type of skill or mechanism that adds the true magic to create the illusion. If you don't add these, plus a little showmanship and misdirection, you get nothing special. You lose the magic.

As developers, we need to apply both a key understanding of our problems and of their solutions. We need to know how to solve a problem that seemed impossible by coming at it with another method.

We don't need to make the magic of an elephant disappearing to be part of the magician's code, never to be revealed. However we do need to hide the complexity of the feat from the end user.

We need to hide the trick of getting from A to B. We also need to escape from seemingly unsolvable problems through means less obvious than a metal key. As developers, analysts, architects, and even QA, we need to think like magicians and escape artists.

Using software should be magical. It should seem like we are overcoming the impossible. The only way to do that is to get into the mind of great men like Houdini.

Saturday, March 7, 2009

Hi boys and girls, time to learn computer science!

I hate to say this, but this is not a site for little kids. Computer science is a weird profession. It is not as noble as firefighting or as important as being a doctor. Computer science attracts very odd people and, worse, people that should never have joined the profession.

Computer science is also not a science. I have never seen it proved that, on a day to day basis, the scientific method is applicable to most software. Yes, you can prove certain algorithms and even use software to do other genres of science like astronomy or physics. But the creation of websites, accounting systems, and even control systems for rockets and washing machines is not a science.

So, what is this blog/book about? The focus is on the psychology of software development. It is also meant to make you laugh. Laughing about computer science is pretty easy. There are so many mistakes it is like the Keystone Cops. Rockets to Mars miss because of decimal points. Whole stock markets crash because we let the computer run amuck. We also torture the poor users with horrible designs. There is a lot to giggle about here.

What I want to accomplish is the education of managers and developers about the little hobgoblins of the mind that cause software to be so incredibly bad. We want to ask important questions here and solve some (we hope) with the light of examination and a little psychology. Here are just a few we will cover:

  • How do developers create those bad interfaces? 
  • Why doesn't the software do what I want? 
  • Why does my computer crash?
  • Why do developers seem to be insane?
  • What methods can we use to create great software?
  • Why doesn't the latest language or software process fix these problems?
It won't be easy. In fact for some developers this will seem outright insulting at first. For managers it may be comforting, but you may still be at the mercy of your developers. No promises because change, especially of long term habits, is slow.

So, stay tuned. We have a great deal to talk about. Send me descriptions of your nightmares or even your successes if by a miracle you had one or two. We are all in this together.

If you are still a kid, go ahead and read this blog. Maybe it will help you avoid this profession.