Saturday, August 8, 2009

Epiphany

For the past few days, I've switched from Firefox to Epiphany as my main web browser.

Epiphany is a Gecko-based* browser with an interface designed according to the GNOME Human Interface Guidelines. It has the basic features you expect from a web browser: bookmarks, tabbed browsing, search via the location bar. Essentially, Epiphany is a stripped-down version of Firefox. It supports the Netscape plugin architecture, so any plugins you installed via Firefox will just work. And since it's Gecko under the hood, websites should render just as they do in Firefox.

Out of the box, there is some strangeness with Epiphany. For example, the default screen layout has two toolbars, one of which holds only the location bar, the other the back/forward/stop/bookmark buttons. But you can right-click any active toolbar button, choose "Customize toolbars", and start dragging the buttons into a more auspicious arrangement.

Also, GNOME uses Ctrl+PageUp and Ctrl+PageDown by default to navigate tabbed interfaces. This is inherently evil because it requires use of the right hand, which, when using a browser, should normally be positioned on the mouse. Normally I'd remap the key bindings (in GNOME, you can go to System -> Preferences -> Appearance and check the "editable menu shortcut keys" entry under the Interface tab), but GNOME won't let you rebind anything to Ctrl+Tab, since that's used to tab out of a multiline textbox. Fortunately, there is a plugin that ships with Epiphany that allows you to use Ctrl+Tab to navigate between tabs.

JavaScript alerts are problematic -- they block the entire window rather than just the tab that raised them, and you can't even close the window while one is active. This is an issue Opera solved, and Chrome adopted that solution; solving it in a way that fits in with GNOME will require some thought.

Favicons are another issue. By default, Epiphany only loads favicons for websites that use the <link rel="icon"> syntax, but most websites simply include a /favicon.ico file in the root directory. (GNOME's one of the few sites that use the former method.) There's a plugin to fix this, but the version included in Ubuntu does not work.

The final issue has no plugin fix, as far as I can tell: your browsing session is not restored when you close the application normally. If Epiphany dies from a SIGKILL, a SIGTERM, or a crash, it will offer to restore your tabs on the next launch; after a normal exit, it starts a fresh session every time.

What are the benefits, if I'm willing to deal with the awkwardness?


  • Desktop integration. You specify your preferred applications in GNOME, and Epiphany will open downloaded documents with those.

  • Desktop integration. Epiphany looks exactly like I expect an application to look: proper font sizes, for example (Firefox's menu fonts are about 60% too large), and GNOME's standard notification balloons rather than a hacked-up, application-specific toast that renders choppily.

  • Desktop integration. Epiphany doesn't have its own "view page source" function; it dumps a copy of the page in /tmp and opens your preferred graphical text editor on it.

  • The awesome bar looks nicer. This is probably the result of using proper font sizes.

  • Speed. Firefox is slow to start; Epiphany is much faster.

  • Smarter behavior when closing tabs. Firefox is inconsistent here: sometimes it returns me to the last viewed tab, other times it selects the adjacent tab. Epiphany always returns me to the last viewed tab.



All considered, Epiphany shows a lot of promise. If you normally use GNOME, you should try it out.

* With GNOME 2.28, Epiphany will use WebKit, but as of this writing, the latest released version, 2.26, uses Gecko.

Friday, July 24, 2009

Build Your Own CAB: Introduction

Jeremy Miller was working on a "Build Your Own CAB" series a while back. However, he does not have an intimate familiarity with CAB. I, on the other hand, have spent 2.5 years working on a CAB-based application, along with some side projects that use Castle Windsor.

Currently, I'm starting on a GUI project using Mono on Linux. It's against the CAB license agreement to use it on Linux, so I won't be using it. Instead, I'll be taking a look at the components of CAB, how to reimplement them or what replacements are readily available, the merits and deficiencies of CAB's implementation, and ways to avoid potential problems.

CAB Components


CAB consists of several components which could in theory be used separately. In practice, CAB's internal design is too scary to contemplate swapping out components, so the best you could hope for is to rip out a piece to use elsewhere.

Service locator


Technically, CAB does dependency injection, but I generally expect a dependency injection system to figure out the correct order in which to build things; since CAB leaves that to you, it amounts to a service locator in practice.

CAB's dependency injection system encourages you to write god classes that add a few hundred services to the WorkItem. (A WorkItem is like a Windsor container; it holds a collection of services and built objects, and routes events.) This is very procedural and does not involve actual inversion of control as such, but it looks enough like using a full-fledged IoC container that switching later is reasonable.
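To make that style concrete, here's a minimal sketch in Python (the mechanism is language-agnostic; this WorkItem is a toy, and none of these names are CAB's actual API):

```python
class ConsoleLogger:
    def log(self, msg):
        print(msg)

class OrderService:
    def __init__(self, logger):
        self.logger = logger

class WorkItem:
    """Toy service locator: just a dictionary of named services."""
    def __init__(self):
        self._services = {}

    def add_service(self, key, instance):
        self._services[key] = instance

    def get_service(self, key):
        return self._services[key]

# The "god class" style: one bootstrapper procedurally registers everything,
# and must itself get the construction order right -- nothing is inverted here.
def bootstrap(work_item):
    work_item.add_service("logger", ConsoleLogger())
    work_item.add_service("orders",
                          OrderService(work_item.get_service("logger")))
```

Note that bootstrap has to register the logger before the order service; the container never works out construction order on its own, which is exactly the complaint above.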

Events


CAB events are wonderful. Somewhat. They closely match Ayende's EventBroker, except for being based around EventHandler rather than Action (that is, Ayende decided his events shouldn't ever have parameters, whereas CAB decided on [object, EventArgs]).

There are some advantages to the CAB system, but there are a couple of maintenance issues as well, and a few handy features it entirely lacks. On the whole, CAB events hold up well enough.
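For illustration, a broker with CAB's (sender, args) handler shape might look like this Python sketch (the topic strings and method names are my invention; CAB's real broker is attribute-driven):

```python
class EventBroker:
    """Toy publish/subscribe broker with (sender, args) handlers."""
    def __init__(self):
        self._handlers = {}  # topic name -> list of handlers

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, sender, args):
        # Every subscriber sees (sender, args), mirroring EventHandler's
        # (object, EventArgs) signature rather than a parameterless Action.
        for handler in self._handlers.get(topic, []):
            handler(sender, args)
```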

Commands


Commands are events that can be turned on or off. Not much to say about them other than that.
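That idea fits in a few lines. A hedged Python sketch (invented names, not CAB's API):

```python
class Command:
    """An event handler that can be switched off; a disabled command is a no-op."""
    def __init__(self, handler):
        self.handler = handler
        self.enabled = True

    def execute(self, sender, args):
        # Only fire the underlying event handler while the command is enabled.
        if self.enabled:
            self.handler(sender, args)
```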

Items


Items are, well, any state you want to keep around. It's a place to build transient objects or store items temporarily.

I suggest only using it for building transient objects. You don't want arbitrary classes pulling stuff directly out of the workitem, since that's unnecessary coupling, so you may as well use a service with local variables instead.

Modules


A Module is just a DLL with some bootstrapping code. A CAB application will contain an XML catalog of modules that will be loaded and run on startup.

CAB Organization


In CAB, everything is part of a WorkItem. The intent is that you have a root WorkItem that represents the essential, system-wide state of your application, and you create child WorkItems on a per-task basis.

In essence, CAB uses WorkItems as a lifecycle mechanism for components. In CAB, configuration is tied to initialization: creating a child WorkItem requires you to call the necessary configuration code at creation time. This code can be relatively verbose, which creates the impression that CAB is slow. (Additionally, since CAB uses reflection extensively and, because configuration is married to initialization, is unable to cache anything, CAB really is slow. However, your IoC container will almost never be a significant bottleneck.)
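The marriage of configuration and initialization can be sketched like this (Python, invented names): configuration is just code that runs, in full, every time a container is constructed, so nothing can be cached across WorkItems.

```python
class Container:
    """Toy container whose configuration is code run at construction time."""
    def __init__(self, configure):
        self._services = {}
        configure(self)  # configuration *is* initialization

    def add(self, key, instance):
        self._services[key] = instance

    def get(self, key):
        return self._services[key]

    def create_child(self, configure):
        # Each child re-runs its (possibly verbose) configuration from scratch;
        # nothing learned while configuring the parent carries over.
        return Container(configure)
```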

In practice, child WorkItems are used rarely. Only if you know you'll have a whole slew of items that you need to create now and destroy later -- for instance, a new and complex window that shares little functionality with the rest of your application -- will you resort to child WorkItems.

Conclusion


I've talked a bit about what CAB is. Next time, I'll discuss how a CAB application is structured, with special reference to the common pain points.

Monday, July 13, 2009

On window managers and Windows 7

I hate Windows.

There are two major reasons for this. First is the lack of a reasonable text shell like bash. Second is the window manager.

The State of the Art


There are a lot of great features that every window manager on Linux has: edge detection (resistance when positioning windows partially off-screen or overlapping other windows), virtual desktops (swap out all your windows for another set), vertical and horizontal maximize, alt-drag, always-on-top...

Windows XP came early enough that these features were not yet compulsory; Windows doesn't need to be a technology leader. But Vista should have included them, and Windows 7 still doesn't.

Vista and Windows 7 did try to add a couple of new window manager features. Edge detection? No. Virtual desktops? No. Always on top? No. Alt-drag? No.

There's vertical maximize...sort of. If you drag a window to the left or right side of the screen, it's resized to take up that half of the screen.

There are two other notable features in W7's window manager.

Mouse Gestures


Make that mouse gesture. There's only one. If you "shake" a window, all other windows are minimized.

Why? When would you want this? If you only want the one window, why not maximize it? If you had virtual desktops, you could send the window to an empty desktop, or even make a new desktop and send it to that.

Perhaps someone heard about virtual desktops and misunderstood the feature entirely. They ended up with something absolutely useless.

Alt-Tab Outlining


While the alt-tab menu is open, the currently selected window is outlined. For example, if you have Firefox maximized, with one terminal behind it in the upper left and another in the lower right, cycling through the alt-tab menu shows the outline of the upper-left terminal when it's selected, and likewise for the lower right.

Windows 7 takes this to the logical extreme. When alt-tabbing, all windows are rendered as outlines. (Well, shadows, actually, which is effectively the same, just harder to see.) If you have ten windows open, you have no hope of distinguishing between them, except by the 1/16-scale thumbnails the menu provides.

It's The Team, Stupid


If anyone working on these features had sat back and thought about them even briefly, they would have realized that their implementations were worse than useless. This is a case of keeping up with the Joneses without knowing what the Joneses are doing.

Microsoft stole these features, but when you're stealing a feature, you can at least get it vaguely right. With Microsoft's resources, they should be able to surpass the features they stole. This can only be a result of gross incompetence.

Wednesday, May 20, 2009

Monoculture == good?

Being a performance-minded person, I wanted to compare the speed of my dependency injection / inversion of control container, dconstructor, to the speed of other popular IoC containers. Since I am most familiar with C#, my first thought was to try Castle Windsor.

My day job involves programming in C# on Windows, but at home, I use Linux exclusively, which means using Mono. Perhaps unsurprisingly, Castle Windsor does not work on Mono: it depends on .NET 3.5 libraries that are only partially implemented there.

I began to consider, at this point, the relative situations of Mono and D. In the past, I have gotten internal compiler errors in both, much more frequently than I would expect from a production-quality compiler. In Mono, these were usually related to libraries -- perhaps with Castle, sometimes; never with any library I thought might typically be developed against Mono. In D, it was usually my own code, though I'll grant that my D code involves fewer libraries in many cases.

The primary issue is that C# libraries can target two platforms, one of which is significantly more stable and has a more extensive standard library and much larger installation base. There was a time, I am told, when C# libraries would contain workarounds for differences and errors present in Mono and not in .NET, but this attitude is no longer prevalent.

In D, there is only DMDFE. All libraries target it.

Another helpful element is that the primary way to distribute a D library is as source. It's quite doable to fix up problems in a library that stem from compiler and stdlib differences. There's not much you can do with an opaque DLL, which makes problems look more insurmountable than they are.

All told, though, I think that D's relative monoculture is a good thing. There are two or three teams independently working on the same open source codebase, so there's less worry about D suddenly going away if Walter ceases to distribute it. And I can assume that any maintained library in D will continue to work with up-to-date compilers.

That might change when Dil comes around, but I still look forward to Dil. I've seen DMDFE, after all.

Tuesday, May 19, 2009

Lifecycle support in dconstructor

Dconstructor now has lifecycle support.

What does this mean? It means your objects will die a fiery death.

Previously, dconstructor supported two lifecycles: Instance and Singleton. Instance indicated that an object was entirely transient and had to be rebuilt each time it was required. Singleton indicated that an object was permanent and could be safely reused from the time it was first created onward to the end of time.

This has been generalized. Each object builder is now associated with a lifecycle. When it builds, it gets a lifecycle ID from the lifecycle. The next time it builds, it asks the lifecycle if its ID is still valid. If the ID is valid, then the builder returns the same instance; otherwise, it creates a new instance.

When registering a type, you can specify a lifecycle. You can also set a default lifecycle, and create LifecycleProviders to determine policy for particular types.
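Sketched in Python (dconstructor itself is D; these class and method names are illustrative, not its real API), the scheme above looks like this:

```python
class SingletonLifecycle:
    """IDs never expire: the first instance is reused forever."""
    def current_id(self):
        return 0

    def is_valid(self, id):
        return True

class InstanceLifecycle:
    """IDs are never valid: every build produces a fresh instance."""
    def current_id(self):
        return None

    def is_valid(self, id):
        return False

class Builder:
    """Object builder associated with a lifecycle, as described above."""
    def __init__(self, factory, lifecycle):
        self.factory = factory
        self.lifecycle = lifecycle
        self._instance = None
        self._id = None
        self._built = False

    def build(self):
        # Ask the lifecycle whether the ID from the previous build still holds.
        if self._built and self.lifecycle.is_valid(self._id):
            return self._instance
        self._instance = self.factory()
        self._id = self.lifecycle.current_id()
        self._built = True
        return self._instance
```

A custom lifecycle (per-request, per-window, and so on) only has to decide when old IDs stop being valid.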

This is a part of dconstructor.build2, of course.

Dconstructor updates

Just a quick note about dconstructor: I've added dconstructor.build2, which is nearly a drop-in replacement for dconstructor.build. (Interceptors require modification.)

This reduces the executable bloat significantly -- in one mid-size example, dconstructor previously was responsible for a 75% increase in executable size (4.8MB to 8.3MB) and is now only responsible for about 100KB. The new version is less than 5% slower.

This also greatly decreases compilation times. examples/speed.d compiles in 3.4s rather than 14s.

The update is highly recommended, unless you need templated interceptors. Additionally, default_builder will soon be changed to use build2, since default_builder does not allow you to inject interceptors.

Thursday, May 14, 2009

The Visitor Pattern and Extensibility

I've been dealing with Dil lately. It's a compiler project for the D programming language, written in D. It makes use of the visitor pattern to provide semantic analysis. (The visitor pattern is a way of achieving double dispatch via strongly typed overloads. It was a clever hack when it was first created.)

One issue with the visitor pattern is that it requires a lot of boilerplate code. If you want to have multiple semantic passes, you need to use an interface, and that interface requires a method for each type. If your visitor only cares about ten types and the interface supports fifty, you have a problem.

This doesn't much matter for Dil -- for any given visitor, very few types are irrelevant.

But let's look at a different problem. Let's say we want to log all visitor actions. Where do we do this? There are two choices: every single visitor method, or every single visited class. Similarly if we want to filter visited items, or set a breakpoint, or anything interesting like that.

I'm working on semantic analysis in Dil right now. For cleanness, I would like to split semantic analysis into possibly many phases. However, I would also like to combine semantic passes when possible for efficiency. To do this, I need to create a visitor that will coordinate between several visitors. This is an unreasonably large task with the current Visitor pattern implementation.

What else could I use, though?

In the past, I've used this concept:


void delegate(Node)[ClassInfo] handlers;

void visit(Node node)
{
    if (auto ptr = node.classinfo in handlers)
    {
        auto dg = *ptr;
        dg(node);
    }
}


This design is sufficiently minimal that it's easy to do a fair bit with it:

  • Log each visited node

  • Use multiple handlers per node type

  • Use the same handler for multiple node types

  • Filter or preprocess nodes based on some criterion available in the base class

  • Ignore various node types by not writing any code for them



There is one problem with it, though: you have to do a lot of casting. In C#, you can use reflection to invoke the methods without caring about their types. In D, you can write a little wrapper:


class Invoker(T : Node)
{
    void delegate(T) dg;

    this(void delegate(T) dg)
    {
        this.dg = dg;
    }

    void invoke(Node node)
    {
        T theNode;
        debug
        {
            // safety: cast and check
            theNode = cast(T)node;
            assert (theNode !is null, "expected " ~ T.stringof ~
                " but was " ~ node.classinfo.name);
        }
        else
        {
            // efficiency: force the cast and assume it's correct
            theNode = *cast(T*)&node;
        }
        dg(theNode);
    }
}

handlers[T.classinfo] = &(new Invoker!(T)(dg)).invoke;


A slightly simpler version is possible with D2's closures.

There is one problem with our solution: it uses associative arrays, which could be slow.

If you know what types you support in advance, you can map each to an index to get a very fast lookup. This requires a fast means of getting this index, however. I'm generally inclined to just use the associative array; it's going to be small in any case, so it will not incur a significant penalty in most cases.
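If the hash lookup ever did show up in a profile, the index scheme might look like this sketch (Python for consistency with the examples above; in D you'd store the index on the node type itself, say as a static field or virtual method, so no second lookup is needed):

```python
# Assign each supported node type a dense index, known in advance.
class Node:
    kind = -1

class AddNode(Node):
    kind = 0

class CallNode(Node):
    kind = 1

def handle_add(node):
    return "add"

def handle_call(node):
    return "call"

# A flat list indexed by the per-type kind replaces the associative array.
handlers = [handle_add, handle_call]

def visit(node):
    return handlers[node.kind](node)
```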