Tuesday, November 09, 2010

Regions in C# code

Some people apparently like to put #region directives in their code, denoting which code is for constructors, which is implementations of a certain interface, etc.

I'm agin 'em.

Regions are more trouble than they're worth. True, they are useful for hiding things... but why would code need to be hidden? If implementation details are messy and hideable, extract a class to hide them. If the methods are so fractured and messy that regions are going to make a difference, then, well, regions won't make enough of a difference--it needs refactoring.

I do endorse regions around the using directives and the legal boilerplate at the top of a file, so that VS can open the file with those areas in a collapsed state. These are areas outside the type and namespace, though. Everything else in the code is presumably something I want to see, or else why would I have the file open? Don't make me do extra clicks to see what I came for.

I could consider tolerating regions as a matter of differing tastes, if people would keep them up to date. But as methods and properties get added to an interface, people write new implementations in different parts of the file from the originals. Or, common code gets extracted into helper methods. Etc. Pretty soon, the regions don't even do the one thing they're supposed to do: collect the code into areas of similarity.

Suppose some people do maintain their regions pristinely. When developers are conscientious enough to keep regions up to date, they're usually also careful enough to be writing code that doesn't need regions to clarify it. They're following the Single Responsibility Principle, and using tests, and in general refactoring it before it reaches spaghettihood. The demographic who would be helped by regions--people who write sloppy code but are conscientious about cleaning up after themselves and are good at following rules laid down in a team's coding conventions--basically doesn't exist.

It reminds me of all the people who bought PDAs (back when there was such a category, ten years before smartphones) thinking, "Aha! Now that I have an organizer gadget, I'll get organized." It didn't happen for them. The people who were disciplined enough to keep things in their PDAs and pay attention to them, were already disciplined enough to have been keeping things in their day planners. Or on pieces of damn paper. Nobody ever got a gadget to do what their personality couldn't.

In my experience, regions are all friction, no grease.

Wednesday, October 06, 2010

Soothsaying based on spellchecker vocabularies

Outlook 2007 did not recognize "codemonkey". I thought this was bad news for Microsoft until my Android phone didn't recognize it either. I'm not in a position to check an Apple product just now. Nonetheless, I'm going to go on record (such as it is, in the blogosphere): whichever platform recognizes this word first will also dominate the mobile OS wars.

Thursday, September 23, 2010

Planning of costs that the customer can't see

A friend asked,

We organize our work around stories, that are supposed to present a user requirement, and tech spikes, that we use to figure out how to do something. How do you get architecture/infrastructure work into a sprint? I mean things like "re-implement X using our nifty new architecture" or "set up a new server for QA"?

You know the joke that goes, "The first thing you need to know about recursion, is recursion"? The first thing you need to know about clear agile answers is that agile has no clear answers. Every answer is, "It depends." The rap is that you're making it up as you go along, which is not really a fair summary; but it's also true in some ways and is the source of much of its value.

For me, what makes an agile solution agile is rapid increments with honest re-evaluation, plus as much empirical evidence as can be gathered at low cost. It's the equivalent of numerical analysis through successive approximation.

Okay, I'm two paragraphs in and still haven't addressed the specifics. I've seen architectural overhead/engineering costs addressed several ways. AFAIK there's no general consensus. I don't have any personal preference.

1. Don't measure overhead at all. The point of doing stories with some sort of scoring system, is that you eventually get an approximate team velocity, which helps you predict what you can commit to and when you're behind. The overhead costs of refactoring, deployment, vacation, research, training, conferences, organizational reports/meetings, nosepicking, etc., will come out in the wash. This is why velocity is calculated on actual points delivered; the metric doesn't really care what impedes the delivery. It makes you very empirical, very quickly. The don't-measure approach works well when there's a fairly random distribution of overhead. It also has an enviably low cost of bookkeeping. The downside is, if things get bursty and you flop an iteration due to unusual overhead, there's not much paper trail for management. Usually depends on how much trust and track record you have.

2. Budget X items per iteration for overhead, where X is capped. This is sort of a service-level agreement approach. It acknowledges that devs never get as much time as they wish they could have to do things right. But, like an SLA, it also keeps them from starving. It won't work well if the devs can't contain themselves, i.e., they "estimate" Y hours but invariably do every task to completion, regardless of budget. Also, there's a risk that the time taken to argue about what should be at the top of the dev list will eat up a significant portion of the budget. (Agile's not too hot, IMO, at dealing with lack of consensus, or any other issue that requires actual personnel management skills. It's an engineering practice that has a few social dimensions, but it doesn't really help you figure out what to do when teams are dysfunctional. Its main contribution on that front is that it will yield empirical evidence, fairly rapidly, that they ARE dysfunctional.) (It also doesn't dust your cubicles or clean little bits of food out of your keyboard. In other words, hard problems remain hard, and grunty problems remain grunty.)

3. Attach dev costs to customer-visible tasks. This doesn't always have sensible semantics. Deferred dev costs can get suddenly bursty and have their own crosscutting urgency ("the server has 4GB of disk left; avg response time has already increased by 60X and in 18 hours it will crash dead dead dead"). I'm told it works okay if you don't have deferred maintenance--but I've never worked on a project that didn't.

4. Let dev tasks compete straight up in the planning game (or whatever budgeting scheme you use) against customer-visible tasks. Maximal visibility, at the cost of added complexity to planning. Many non-dev stakeholders will just wish the dev costs would go away, and wonder (loudly) why devs don't just work dev magic and get everything instantly right. It's the most honest, empirical, manageable way to go, but it's vulnerable to a number of not-too-rational, impulsive responses and political infighting that developer-types are not usually good at winning.

Okay, I said I didn't have a preference, but the most sensible approach I've actually been a part of was a hybrid of 2 and 4. This was actually a Lean or Kanban kind of agile, where we organized work around queues that pulled tasks forward, from planning to work-in-progress to demo to deployed-and-done. Most of the dev effort was devoted to the mainline queue, but there were two auxiliary queues: one for pure dev tasks, and another for customer-support issues that we couldn't itemize at the beginning of the iteration, but based on experience, we knew were likely to crop up at some point mid-cycle. The overall experience of that scheme was not especially better than any other, but it did deal rather sensibly with overhead. Dev overhead and emergent customer-support tasks had enough visibility in the metrics that management could see how they impacted velocity. And, the advocates for any given task--both devs and the poor bastards who had customers waiting for them to return their calls--had some visibility into how much work had already happened per iteration on their queues, plus ready access to the pending queue so that they could quickly prioritize new items relative to old. (We kept tasks on 3x5 cards, in physical queues on a wall full of magnetic whiteboards.) The easy visibility had a rather natural self-governing effect toward the end of any iteration: people have a hard time standing up in the open, asking for the 13th special favor of the month.

Friday, July 02, 2010

ViewModel in .NET

[Changed very little from a sizeable post to PADNUG today:]

For observable objects like view models, I usually have a base class that provides INotifyPropertyChanged and IDisposable. Change-tracked properties are held in a PropertyBag, which is a souped-up Dictionary that maps properties to backing values. It detects changes, so it can raise PropertyChanged intelligently.
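The bag itself doesn't have to be fancy. Here's a minimal sketch of the idea (string-keyed for simplicity, with made-up names rather than my actual class); the real bag takes the property lambdas shown below and resolves the names itself:

// minimal sketch of a change-detecting property bag
// (needs System.Collections.Generic)
public class PropertyBag
{
    private readonly Dictionary<string, object> mValues = new Dictionary<string, object>();

    public T Get<T>(string propertyName)
    {
        object value;
        return mValues.TryGetValue(propertyName, out value) ? (T)value : default(T);
    }

    // returns true iff the stored value actually changed; the view model base
    // raises PropertyChanged only when this comes back true
    public bool Set<T>(string propertyName, T value)
    {
        object oldValue;
        if (mValues.TryGetValue(propertyName, out oldValue)
            && EqualityComparer<T>.Default.Equals((T)oldValue, value))
        {
            return false;
        }

        mValues[propertyName] = value;
        return true;
    }
}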

Indeed, if you want to get elaborate, and are comfortable with generics, there's quite a bit that the base class can do for you. A nice pattern is

abstract class ViewModel<TViewModel> where TViewModel : ViewModel<TViewModel>
{}

class ConcreteViewModel : ViewModel<ConcreteViewModel>
{}


This lets your base ViewModel class know the type of its subclass. Once it has that, it can do slick things with lambdas, like

protected TProperty Get<TProperty>(Expression<Func<TViewModel, TProperty>> propertyLambda)
{
    return mPropertyBag.Get(propertyLambda);
}


and

// the return value is true iff the property changed
protected bool Set<TProperty>(Expression<Func<TViewModel, TProperty>> propertyLambda, TProperty value)
{
    return mPropertyBag.Set(propertyLambda, value);
}


That's quite a mouthful, but when you use it, the generic parameters become implicit, so the syntax is just:

public IValueConverter EventDateConverter
{
    get { return Get(x => x.EventDateConverter); }
    private set { Set(x => x.EventDateConverter, value); }
}


It's not quite as simple as automatic properties, but it's the same number of lines; you don't have to declare a backing variable for each property. And, you get change tracking, lazy loading, and INotifyPropertyChanged. It's (reasonably) safe from problems if you rename your properties, because the lambdas are compile-time checked. I say "reasonably" safe b/c you can still screw it up if you do something like:

public IValueConverter QuantityConverter
{
    get { return Get(x => x.EventDateConverter); } // oops, now the lambda points to the wrong property
    private set { Set(x => x.EventDateConverter, value); }
}


There are a dozen little tweaks you can do. For example, I've overloaded Set() to accept a MethodBase, which allows the syntax

public IValueConverter QuantityConverter
{
    get { return Get(x => x.QuantityConverter); }
    private set { Set(MethodBase.GetCurrentMethod(), value); }
}


In other words, you never get copy and paste errors from the setter. 50% reduction in risk surface area.
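For reference, the overload itself is tiny. A sketch of the idea, assuming the name-keyed bag sketched above (RaisePropertyChanged is a hypothetical helper standing in for whatever raises INotifyPropertyChanged in your base class):

protected bool Set<TProperty>(MethodBase setterMethod, TProperty value)
{
    // GetCurrentMethod() inside a setter is the accessor itself ("set_QuantityConverter"),
    // so stripping the "set_" prefix recovers the property name (needs System.Reflection)
    string propertyName = setterMethod.Name.Substring("set_".Length);

    bool changed = mPropertyBag.Set(propertyName, value);
    if (changed)
    {
        RaisePropertyChanged(propertyName);   // hypothetical helper on the base class
    }
    return changed;
}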

Other things:
* Make PropertyBag smart enough to notice when it's got an IList<> or IEnumerable<>; in that case, the default values are empty collections, not null ones. For assemblies involved in UI, I've got an ObservablePropertyBag specialization that knows to initialize ObservableCollections, too.
* Give the base class two generic parameters: one for the concrete subclass, and one for an interface that the subclass implements. If you define the lambda expressions for Get() and Set() in terms of the interface, the lambdas in your properties won't compile until you put the properties on the interface. In other words, the compiler will force you to keep your interfaces up to date. It's not for everyone, but I figure if I'm going to work in a statically typed language (haven't moved to C#4 yet), I might as well make the compiler work for me.
* Since Set() returns a boolean indicating whether the value changed, you can use it as the condition of an if statement and do additional processing (there's a quick sketch after this list). Note that by the time Set() returns, the INotifyPropertyChanged event has been raised. If you want to do pre-processing before the event, you can register the pre-processor during construction. (That's kinda hacky; AFAIK, there's no contract that the framework will maintain the first-registered, first-notified order on event subscribers. But it's been true so far.)
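Here's the kind of thing I mean (Quantity, UnitPrice, and Total are hypothetical sibling properties on the view model):

public decimal Quantity
{
    get { return Get(x => x.Quantity); }
    set
    {
        // by the time Set() returns, PropertyChanged has already fired for Quantity;
        // the boolean just tells us whether any follow-up work is needed
        if (Set(x => x.Quantity, value))
        {
            Total = Quantity * UnitPrice;
        }
    }
}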

Using generics this heavily isn't for everyone. My actual ViewModel base looks like

public abstract class ViewModelBase<TInstance, TViewModel, TView> : ViewModelBase, IViewModel<TViewModel, TView>
    where TInstance : ViewModelBase<TInstance, TViewModel, TView>, TViewModel
    where TViewModel : class, IViewModel<TViewModel, TView>
    where TView : class, IView<TViewModel>


before you get to the opening brace!

In sum, this can be a moderately deep topic if you want it to be. I didn't even touch on keeping the PropertyBag threadsafe...

Patenting business methods: Bilski v. Kappos

Just started following Stephen Forte when I ran into his take on the Supreme Court decision re business method patents.

> I could go and patent my implementation of Scrum since it is a business process and then turn around an sue all of you since I think you are using it.

This is a common misperception. There are some important hurdles to clear before you can patent something; they keep 99% (though clearly not all) of the frivolous stuff out. You can't hold a valid patent on something you didn't invent yourself (or in collaboration with all named inventors), and you have to conceive of it (and then claim it) before others had already begun practicing your invention.  In fact, once you've been practicing your invention in public for a year, even YOU can't patent it.  That's the US; in many countries, the moment you go public, you've lost your chance to patent.

Thus, even if you manage to find a Scrum style you can claim as yours, and patent it, and go to court, if the party you're suing demonstrates that they were already practicing your method before your "claim date", then not only do you lose the suit, but your patent gets invalidated.  An invalid patent is worthless.  You can't collect license fees on it, and you can't sue anyone else ever again with it.

So, there are some reasonable limits to the system, and that's one of my favorites.  Every time you take your patent to court, there's a risk you might blow it up.

> [Bilski] did not invent anything, just a creative way to hedge commodities.

Well, the case turned on the fact that he did not invent any thing.  Methods of doing things are entirely patentable, if they have a tangible output.  The problem with a business method is that its components and output are too "meta": it's a way of organizing and running an abstraction (which is basically what a business, certainly a corporation, is).  Bilski apparently had a new, useful, and original method, since all of those are required to obtain a patent; but there was no physical component, so it was not patentable. 

Patent law is historically all about tangible products, or methods of doing things to tangible products.  Software was considered unpatentable until someone hit on the idea of claiming it as instructions coded for a tangible machine to perform.  In other words, you don't claim the algorithm; you claim the execution of the algorithm by a machine.  Business methods are still trying to find their breakthrough angle into the realm of the patentable.  The Bilski decision was a setback, and a moment of sanity, but it hasn't really settled much.

Tuesday, April 13, 2010

My Biased Coin: What's the Co-Author Line?


An excellent post and excellent comments on Michael Mitzenmacher's blog. I replied and found I'd written so much I might as well blog. By now the topic is whether the identification of a "good" problem is a material contribution to its solution--originally, whether a person deserves authorial credit for framing a good problem without providing its solution. I'm riffing on the notion of what a good problem is.

If you take the view that generating a good problem is a non-trivial contribution then isn't what happened to Michael akin to plagiarism?


My wife did her graduate work in English literature at a school with a strong engineering program. She noticed a certain type of student every semester: undergrads, usually engineering or pre-med, who had registered for her literature course only to satisfy distributional requirements and were very smart, very diligent--but could not manage "interesting" ideas, no matter how hard they tried. She found this heartbreaking--especially the ones who thought that med schools would sneer at a B+. Kids would come to office hours and ask, sometimes in tears, what they could do to turn a logical, systematic, earnest, but dry B+ paper into an A. She'd explain that by the standards of her profession--literary criticism--it's not enough to be thorough or precise or clear or even persuasive. If you can't pick a topic that a reader would find interesting, you haven't done all your work. In some sense, if criticism isn't interesting, if only to other critics, then it's not valuable.

Worse, a sense of the interesting is the last thing to fall into place. She could work with them on clarity, on use of evidence, on logical structure and rhetorical devices... and there were many kids who, over the course of a semester, could make great leaps forward in those areas. But the judgment of what a finding "meant", of how it contributed to the field, or how one idea seemed more central or intriguing or revealing than another, was too much. She suspected it was because they were excellent at grasping and re-creating rule-based outcomes, but she couldn't give a rule of thumb for what made something interesting.

Myself, I am conflicted about how to value interestingness. I have sympathy for the argument that it's fuzzy, subjective, and sometimes just a codeword for "conventional" or "mainstream" or, relatedly, in the mainstream of the latest fads and trends within the field. At the same time, I'm also moved by the argument that a profession, by definition, has standards, and that for the typical case there's a wisdom of crowds. If you show an idea to 10 researchers and none of them is intrigued, isn't there something we can conclude?

Put another way, as an empiricist, how can you possibly value interestingness? And as a curator of an intellectual discipline, how can you not?

Tuesday, March 09, 2010

Pure WPF

This was prompted by a discussion on Padnug.

"Pure" is not a word I'd use often with regard to WPF. WPF rethinks many things, and not all of them have come into flower yet. Which is an optimist's way of saying, parts are missing today. To me, some of its big-picture rethinking has been biased toward massive empowerment of the developer, at the expense of off-the-shelf functionality. DIY over prefab. (I'm thinking especially of its debut without a DataGrid. In WPF you can write dazzling things, and when you know what you're doing the dazzle is just as easy or easier than WinForms... but you have to write them. The OOBE excitement is pretty sparse.) It's also been slow to build a comfort level in the dev community. To some, MVVM goes hand in hand with WPF, but I see senior developers still wary of MVVM or puzzled by it, and there are big areas where reasonable people disagree what it requires. [For example, the eventing between View and VM can be done in several ways, including at least two distinct patterns--classic .NET events vs routed events. Reactive.net (on its way to WPF) reworks the async pattern. Some people abstract the views in ways that the VM can manipulate; others are horrified by that. One's ideas about that are sometimes influence by whether you're using DI or not. In short, there's a fair amount of flux.] Junior devs, or old guard devs who aren't especially current, are mystified if not scared. Plus, of the total .NET workforce, there's more people on the web side than the desktop, I think--so lots of people who've stayed on the cutting edge of .NET as much as they've needed to, haven't touched WPF yet.

That's a long way of saying that to my mind, WPF feels too diffuse to be pure.

Tuesday, March 02, 2010

ViewModel rationale

Nikhil Kothari had a great couple of posts this week, notably this one.

He makes a good case for ViewModel, especially the idea that it's the next step up from codebehind. But there's more to be said. First off, UI is expensive, and not just mildly--it's basically as expensive as a software feature can be without being prohibitive (e.g., voice recognition). In my experience, whether on WinForms, HTML, Ajaxy and CSSed HTML, WPF, WebForms, or Silverlight--even my little tastes of Ruby on Rails--UI just takes developers longer to get a feature done-done. Often 4 or 5 times longer. There are some inherent and perhaps intractable reasons for this: UI is often what product owners feel most acutely, so there's lots of micromanaging, fiddling, and rework; the human eye is trained by evolution to absorb visual information quickly, with an especial emphasis on inconsistencies, so the tolerance for error is different; and everything in a UI tends to be intricately and sensitively connected to its neighbors, in time and in space, resulting in the wonderfully correct and illustrative metaphor of "the ripple effect".

There are also some regrettable reasons why UI is so hard: most notably that even the best tools, theories, and mindshare are still in their infancy. Many concepts that are getting some traction these days ("MVVM", "UX design") had effectively zero presence in commercial software even 15 years ago.

Since UI is expensive, a major trend among MV* patterns (MVC, MVP, MVVM, and other flavors of SupervisingController) is to make the UI dev burden as light as possible, and to provide more framework/scaffolding that makes much of the plumbing just happen. The eventing and data binding in WPF and SL are important steps forward; so are codegen and reflection, which assist metaprogramming.

It's significant that ViewModel mindshare didn't really take off until the codebehind pattern matured a bit and there was framework support (primarily .NET) to knit the view's document-orientedness to its logic and state, as coded in... um, well, code.

I think the rise of CSS also played a crucial role, convincing millions of developers of the wisdom of separating visual markup from the containment tree--preferably in a reusable way that makes application-wide look-and-feel not only achievable but flexible and maintainable. CSS wasn't a step toward MVVM, but it cleared away some of the underbrush.

JavaScript is another factor that might have scared some devs straight, after too much time spent repairing "clever" code integrated into a page.

So, some more drivers toward MVVM:
* stateless view --> one set of concerns (representation of state) out of the expensive UI
* passive view --> another set of concerns (choosing the next state, and transitioning to it) out of the expensive UI
* styles --> a third set of expensive concerns mitigated, at least partially. And MVVM handles most major tasks well, EXCEPT for styles, so it needed that task taken off the board
* scar tissue from browser scripting
* declarative code vs imperative/programmatic code, coexisting harmoniously via codebehind
* "conventions", base classes, and other forms of code leveraging/reuse that allow recurring problems to have recurring solutions, with minimal additional developer effort

Wednesday, January 27, 2010

Raising .NET events

This is a recycling of something I posted to Fabulous Adventures in Coding, in conversation with Pavel Minaev. Necessary background: the official .NET pattern for standard events (as opposed to, say, WPF routed events) recommends invoking them with a method along the lines of:
protected virtual void OnFoo(FooEventArgs e) {...}
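For readers who haven't memorized the pattern, the conventional shape looks roughly like this (Foo, FooEventArgs, and Publisher are placeholders, not types from the discussion):

public class FooEventArgs : EventArgs { }

public class Publisher
{
    // subscribers attach here; the backing delegate stays private to this class
    public event EventHandler<FooEventArgs> Foo;

    // derived classes raise (or suppress, or decorate) the event through this hook
    protected virtual void OnFoo(FooEventArgs e)
    {
        var handler = Foo;          // copy to guard against unsubscribe races
        if (handler != null)
        {
            handler(this, e);
        }
    }
}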

I thought of a few reasons for the "protected virtual OnFoo(FooEventArgs)" pattern. I can't come up with an aha moment, though.

There should be some way for a derived type to invoke its parent's event. That supports "protected" and the args.

There shouldn't be a way for nonderived types to invoke an event. That rules out "protected internal".

There shouldn't be a way for nonderived types to inspect the underlying delegate chain. Subscribers can put delegates to nonpublic methods in there; there's an expectation of privacy. That underscores why the backing delegate is private.

This is almost more of a mechanical problem than a logical one, but: there's a syntax problem in C#, at least--a logjam around accessibility. An event already has a modifier that is shorthand for its Add and Remove methods. How would we express accessibility of the event raiser, distinct from the accessibility of add/remove?

That covers everything but the reasons for "virtual". Leo mentioned the template method pattern. I think I'd call this a hook more than a template--I think template implies both virtual and nonvirtual steps, where the base class performs the invariant steps nonvirtually. In this case, there is no invariant behavior, is there? Derived classes can suppress the actual raising of the event altogether. But the larger point stands: derived classes can decorate the base behavior.

If a derived class provides its own Add/Remove, it has to invoke its own local event; the base event is private. So there's an in-for-a-dime-in-for-a-dollar reason.

By making OnFoo virtual, you allow derived types to do a bit of covariance. Derived classes can read all the data from the FooEventArgs object and invoke an event that passes a SubFooEventArgs instead. It's cowboyish but possible. (I'm not coming up with a compelling example why you'd WANT to do this, but...)

Maybe this is a more realistic motivation: with a virtual OnFoo, you can precede (or simply replace) the base class's event with a cancellable event of your own.
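A sketch of what I mean, assuming a base class shaped like the Publisher above (FooRequested is a made-up name):

public class CancellablePublisher : Publisher
{
    // a derived-class-owned event that lets subscribers veto the base event
    // (CancelEventArgs lives in System.ComponentModel)
    public event EventHandler<CancelEventArgs> FooRequested;

    protected override void OnFoo(FooEventArgs e)
    {
        var cancelArgs = new CancelEventArgs();
        var handler = FooRequested;
        if (handler != null)
        {
            handler(this, cancelArgs);
        }

        if (!cancelArgs.Cancel)
        {
            base.OnFoo(e);   // only raise the base event if nobody cancelled
        }
    }
}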

Okay. That's plenty of offtopic speculation from the likes of me. I'd still like to know if there's an aha-moment explanation, though. This is beginning to feel like an interview question...