Latest Publications

Yes Virginia, You Can Work on Great Technology at Startups

You can work on great technology at startups. You wouldn’t think that would be a controversial statement. But it is if you believe Ted Ts’o’s defense of Google, “Google has a problem retaining great engineers? Bullcrap.” Ted dismisses the engineering that goes on in a startup, saying:

Similarly, you don’t work on great technology at a startup.  Startups, by and large, aren’t about technology — at least, not the Web 2.0 startups like Facebook, Foursquare, Twitter, Groupon, etc.   They are about business model discovery.  So if you are fundamentally a technologist at heart, whose heart sings when you’re making a better file system, or fixing a kernel bug, you’re not going to be happy at a startup.   At least, not if the startup is run competently.

Ted might have a point about Web 2.0 startups, but there are still technology startups in software. These startups generally need to prove out their product and market rather than their business model. Business model innovation is sometimes part of the exercise. But more often the company is executing on a standard business model, with some need to validate the market, a greater need to validate/implement the technology, and most importantly a need to link the innovative technology to an addressable market. Much has been written about this, because it is the traditional structure of startups.

Web 2.0 startups are trendy right now because they are disturbingly capital efficient. Companies like Diapers.com and Groupon have negligible technology risk. Proving out the business model costs very little money in the age of Everything as a Service (EaaS). They generate good stories about selling virtual goods before they exist, about zero-inventory supply chains, and about zero-employee companies. Investors like the idea of low-risk, high-reward returns, even if they are still uncomfortable with the decreased emphasis on capital.

But while those companies are grabbing headlines and mindshare there is plenty of deep technology innovation going on in startups. There are more innovative database startups at various stages in their life than I can remember right now (e.g. Vertica, Clustrix, Tokutek), not to mention the NoSQL startups (Cloudera, Basho), messaging companies (Solace, Kaplan, 29west), visualization companies (Panopticon, Spotfire), and hundreds of other software startups with a sizable technical product innovation challenge ahead of them. And there are plenty of recent success stories that wouldn’t have been able to build their company without great technology (VMware, Google, Amazon).

It’s great that business model innovation is well enough understood that it is top of mind for developers. Understanding what key innovations are required, be they business or technical, and what are the most efficient ways to validate them, is key to success in any startup. It’s too bad that some engineers think that there is no longer a place for great engineering at startups. Not all startups require great engineering, but many still do.

Ted’s trying to defend Google against claims that Facebook is poaching all the engineers. From where I stand, he’s right. Plenty of great engineers are going to work at Google, more than are leaving. And Google is able to run projects like ChromeOS, LLVM, and AppEngine. Projects that wouldn’t be the same in a startup.

But if you were going to find fault with Google in this, consider: Googlers now believe they are doing engineering that can’t be done anywhere else. If that was true, it would mean they don’t have anything to fear from startups. Believing that is a step towards the hubris and ossification that Google is working so hard to avoid.

Three Months Without Cable

As was widely reported in the media, the second and third quarters of 2010 showed a steady decline in cable subscriptions. This is earth-shaking for the cable companies, which had seen growth in US subscribers over their entire history. It’s a key indicator not only of consumers being more careful with their spending, but of the rise of Internet-delivered media as a compelling alternative.

In August of this year I joined the ranks of people “cutting the cord”. I was moving, and when we set up Verizon FiOS at the new house we left off video. Three months later, I’d like to fill you in on how it has gone and what I see in the future of consumer video delivery.


Captchas: The Bear Proof Trash Can Problem

Lately I’ve been selling a lot of things on Craigslist. Along with adventures in capitalism, every post to Craigslist requires filling out a CAPTCHA, specifically a reCAPTCHA. I’ve noticed that they have gotten quite difficult. In fact, at least one of the captchas I got recently was in Greek.

Captchas are a really clever idea, but they represent a special kind of arms race. The spammers are always improving their automatic and semi-automatic captcha solvers. At the same time, the average web user is not getting any better at solving captchas. The goal of the captcha company is to hit the window between what motivated spammers can do automatically and what web users can do manually.

I call this the Bear-Proof Trashcan Problem. If you have ever walked up to a trash can in a bearful park, you know the experience. The instructions on the trash cans keep getting longer, the mechanical bits more complicated and more hidden. The result is tourists leaving trash outside the cans, which is as bad as not bear-proofing the cans at all. But if the cans are simpler, or require less manual dexterity, bears figure them out. The bears are willing to put a lot of time into it. And as one park ranger put it, “The smartest bears are smarter than the dumbest tourists.”

When you are building software license enforcement, or writing tax law, or creating frequent flyer programs, you face the same problem: the desirable majority is willing to spend much less time dealing with whatever you create than the undesirable minority is going to spend breaking it. Very often people forget this rule, and build systems which focus on preventing the undesirable behavior, driving away the desirable but uncommitted majority. It’s easy to build a bear proof trash can. It’s hard to build one that a tourist can use.

Proactive Assumption Violation: Avoiding Bugs By Behaving Badly

Bugs are a fact of life in software, and probably always will be. Some bugs are probably unavoidable, but a lot of bugs can be avoided through good architecture, defensive programming, immutability, and other techniques. One major source of bugs, especially frustrating bugs, is non-deterministic behavior. Every programmer has experienced bugs which don’t reproduce, which require a special environment, or special timing, or even just luck to make happen. To avoid these bugs, programmers learn to favor determinism, making sure their software behaves the same way every time.

But sometimes a little extra non-determinism can help to avoid bugs. When designing a library the specification may contain caveats which the implementation does not exercise. If a program only ever uses one implementation, or doesn’t exercise the full range of behaviors while testing, then the program may depend on behaviors which are an artifact of the implementation. When an alternative implementation is used, or circumstances exercise other behaviors, then the program will exhibit bugs. What I would like to suggest is that instead the implementation proactively violate any assumptions the client might have, by deliberately and non-deterministically taking advantage of all the caveats in the interface specification. This will force clients to code defensively, and help to eliminate this class of bugs.

For example, an interface for Set might include iteration, but say that iteration order is unspecified. Most Set implementations, like Java’s HashSet, will iterate in a stable order if the set doesn’t change. The order might even be consistent from one run of the program to the next. But if programs depend on iteration stability, substituting a different implementation, such as one based on a splay tree, will introduce bugs. If instead the implementation of Set iteration deliberately returned a different iteration order each time, then programs would be unable to depend upon it.
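To make the idea concrete, here is a sketch in Python rather than Java (the class name is my own invention, not any standard library type): a set wrapper that deliberately shuffles its iteration order on every pass, so no client can come to rely on it.

```python
import random

class AssumptionViolatingSet:
    """A set whose iteration order deliberately changes on every pass,
    so callers cannot come to depend on any particular order."""

    def __init__(self, items=()):
        self._items = set(items)

    def add(self, item):
        self._items.add(item)

    def __contains__(self, item):
        return item in self._items

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        # Copy the elements and shuffle before yielding, so two
        # consecutive iterations of an unchanged set may disagree.
        order = list(self._items)
        random.shuffle(order)
        return iter(order)

s = AssumptionViolatingSet(range(10))
print(sorted(s))           # contents are stable, even though order is not
print(list(s) == list(s))  # almost always False
```

Any test suite run against this implementation will immediately flush out code that quietly assumed a stable iteration order, long before a splay tree shows up in production.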

For a real-world example, consider the standard C function memcpy. According to the specification, if the source and destination buffers overlap, behavior is undefined. But what does undefined really mean? Recently, Linux switched to a new memcpy implementation, one which copies backwards (high bytes first, low bytes last), because it is faster on modern hardware. The result is a dramatic change in overlapping buffer behavior, leading to difficult-to-isolate bugs like Red Hat Bug 639477: Strange sound on mp3 flash website. The bug was eventually tracked down to memcpy using valgrind. But if the original memcpy had been more deliberately harmful in the case of overlapping buffers, then bugs like this would not have been created in the first place.
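The direction dependence is easy to reproduce. Here is a sketch in Python (a stand-in for memcpy, not the real thing), copying a four-byte region one position to the left within the same buffer:

```python
def copy_forward(buf, dst, src, n):
    # Low bytes first, like the traditional memcpy.
    for i in range(n):
        buf[dst + i] = buf[src + i]

def copy_backward(buf, dst, src, n):
    # High bytes first, like the newer, faster implementation.
    for i in reversed(range(n)):
        buf[dst + i] = buf[src + i]

a = bytearray(b"abcdef")
copy_forward(a, 0, 1, 4)   # overlapping regions, dst < src
print(a)  # bytearray(b'bcdeef') - the result programs accidentally relied on

b = bytearray(b"abcdef")
copy_backward(b, 0, 1, 4)  # same arguments, new copy direction
print(b)  # bytearray(b'eeeeef') - source bytes clobbered mid-copy
```

A program written against the old memcpy could pass overlapping buffers for years without symptoms; swap in the backward copy and the data silently smears, which is exactly the class of bug the Red Hat report describes.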

Another place where this comes up is thread safety. Many APIs are not thread safe, but the results of using them in a thread-unsafe way are benign, or unlikely. Take the Java DOM API for XML documents. This is not a thread safe API, which is not surprising for a complex mutable data structure. What is a bit surprising is that even just reading the Java DOM from multiple threads can have unintended consequences. This is because of a cache for reuse of Node objects, and the failure mode is that very occasionally accessor functions return null when they are called from multiple threads simultaneously. Debugging a program that was suffering from this behavior took several hours, because the undesirable behavior is very infrequent. Is there a way we can apply the principle of proactive assumption violation to make this sort of bug less common?
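One sketch of an answer, again in Python (a hypothetical helper, not part of any DOM library): wrap the non-thread-safe object in a guard that fails loudly the first time a second thread touches it, turning a rare silent null into an immediate, debuggable error.

```python
import threading

class SingleThreadGuard:
    """Delegates to a non-thread-safe object, but raises the moment it
    is used from a second thread, instead of misbehaving rarely."""

    def __init__(self, wrapped):
        self._wrapped = wrapped
        self._owner = None
        self._lock = threading.Lock()

    def __getattr__(self, name):
        # Called for any attribute not found on the guard itself,
        # i.e. every method of the wrapped object.
        with self._lock:
            me = threading.get_ident()
            if self._owner is None:
                self._owner = me   # first thread to call claims ownership
            elif self._owner != me:
                raise RuntimeError(
                    "non-thread-safe object touched from a second thread")
        return getattr(self._wrapped, name)

doc = SingleThreadGuard([])   # a list standing in for a DOM document
doc.append("node")            # fine from the owning thread
```

Instead of an accessor occasionally returning null hours into a run, the second thread gets an exception on its very first call, with a stack trace pointing at the offending code.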

Systems that cope with infrastructure faults by degrading their behavior are another case where proactive assumption violation can reduce bugs later. NoSQL databases are well known for taking approaches like eventual consistency in order to offer better performance and availability. But that means that when the system is under heavy load or suffering from partial outages, consistency may take a long time to resolve. I ran into this as a user of Netflix the other night. My television and my laptop had two different ideas of what my current Netflix queue contained; my television couldn’t see the recent updates. It turns out there is even a slide deck from Netflix describing the architecture choices that led to my undesirable user experience. Most of the time things are consistent enough that I wouldn’t see this asynchrony. But when I do see it, as a user there is no obvious way to get around it or even to know that it is happening. Would a NoSQL infrastructure that let consistency drift more often end up with better average user experience?

Clients of interfaces often make assumptions about how those interfaces work, assumptions that are explicitly or implicitly not part of the spec. But if implementations of the interface don’t violate those assumptions, then programs can be developed which require them to be true. This leads to unexpected and expensive bugs if and when those assumptions are violated. One solution is for implementations to deliberately violate these assumptions, for no other reason than to force clients of their interface to future-proof. The result is more work up front for programmers, but fewer bugs in the long run.

What do you think about proactive assumption violation? Is it a technique you have ever used? Have you experienced bugs which would be avoided if others had employed proactive assumption violation?

What the Rally for Sanity Meant for Me

The weekend before last my family went to the Rally to Restore Sanity in Washington DC. We went for a few reasons: because we hadn’t been to DC in a while, because a lot of our friends would be in town, because it was a cheap trip thanks to points/miles. It did end up being a fun trip, even if the rally was too crowded to be safe for the baby and we saw less of the stage presentation than we would have seen on TV. Many friends were in town, everyone was in a good mood, and there were lots of things to see and do.

Since the rally and the election, several people have asked me what the rally was about. Some of these people asked out of complete ignorance, others in a more confrontational way. I realized that I didn’t have a great answer for what the rally was about. For some it was a cult of personality thing. For others it was an opportunity to hang out with fellow members of the internet culture. As a political action, it’s hard to point to much success though.

The rally didn’t really work to unify people. It was kind of an un-political rally. Like the NoSQL database movement, defined by what it is not rather than what it is. The rally was not about confrontational politics, sound bites, 24-hour news cycles, and a divisive approach to our nation’s challenges. It was not about party politics, or third-party politics, or any specific political agenda. To the extent there was an agenda, it was a media agenda. But that’s just Jon Stewart’s agenda, because he is a media personality criticizing media behavior. Based on the number of Reddit signs I saw, most people at the rally had already opted out of the mainstream media.

The question is what they can opt into. Stewart may have organized the rally, but he is ill equipped to lead a movement. And there were a lot of people at the rally, myself included, who might be ready to join a movement. The problem is that movements are about branding, and branding is about media, and if your goal is to opt out of the current media structures, how do you build a movement with a voice? If your goal is to avoid extremism, how do you get people enthused enough to accomplish anything? And what would the movement be trying to do anyway?

As I thought more about this, I think the desire of most people at the rally is for more fact-based, moderate, and cooperative politics. But what policies or actions will actually bring that about? The best I’ve been able to come up with is to work to end gerrymandering and to improve election structures. Gerrymandering leads to “safe” districts, which are then decided by primary voters, who favor uncompromisingly partisan candidates. The simplistic “first past the post” voting schemes used for congressional and state legislative elections put a heavy emphasis on party politics and encourage gerrymandering. People who study elections have lots of superior systems for both districting and voting, but only a handful of elections in the United States use them.

The Rally for Sanity demonstrates that there is a large group of people interested in seeing more moderate and constructive politics. I think the best way to make that happen is with election reform. But where does that lead? The most compelling election reform group I find is FairVote.org, but I don’t feel like they have a lot of momentum. Election reform is often a fairly technical idea. Will many of my fellow rally-goers be interested?

What activity do you see coming out of this rally, or coming out of the silent majority of frustrated moderate American voters? How are you supporting improved political discourse?

Lean Startups and the Theory of the Firm

If you spend much time in the entrepreneurial corners of the blogosphere, you’re certain to have heard about lean startups. If you haven’t, check out Eric Ries and Steve Blank. The core of the lean startup is two related ideas: continuous validation and building the smallest company that can validate an idea. The result is dramatically reduced costs, reduced time-to-failure, and reduced risk. A lot has been written and can be written about validation. But what I’m concerned about now is how small the smallest possible company is, and specifically why it is usually more than one person.

In business school you are likely to encounter the Theory of the Firm. If you haven’t been to business school, but you grew up in the modern West, it may seem strange to think that you need a theory to explain the existence of big companies. But actually, big companies are a recent innovation, something that came about in the latter part of the Industrial Revolution, the early 20th century. Adam Smith, when conceiving the famous Invisible Hand of capitalism, had no concept of the international megacorporation. His pin makers worked in small groups, with the free market guiding their interactions.

If the markets are efficient, there ought not be any need for corporations. People can freely associate to pursue their various goals, exchanging money for goods and services, each pursuing their own ends. In fact, a large corporation represents an imposition on the free market, where a group of people (employees) decide to transact with each other and the owners of the corporation under special rules. The question at hand is why they do that, and why are some specializations best kept inside the firm while others are commonly contracted out.

The large corporation may be a phenomenon of the 20th century, brought about by efficiencies of scale, inefficiencies of communication, concentrations of management and financial expertise. Or there may be fundamental value to the corporation. The theory of the firm enables us to reason about why companies exist, and whether they will persist.

The most widely understood theory of the firm is that of Ronald Coase, based on transaction costs. In Coase’s model, having a service provider within the firm is economically advantageous when the cost of transacting for a service or asset with an outside party exceeds the inefficiency of bringing the service or asset inside the firm. To answer whether a given function belongs inside the firm, from office cleaning services to recruiting to software development, weigh the transaction costs of contracting for the function against the efficiency lost by bringing it in-house. This theory is very attractive for the modern lean startup. In the 21st century more and more functions, from graphic design to office space, are being standardized, commoditized, and delivered on liquid markets like 99designs. As communications technology improves, transaction costs go down, and firms should get smaller. These are exciting times.

But there are other approaches to understanding the firm. The paper which precipitated this blog post, Eric Van den Steen’s Interpersonal Authority in a Theory of the Firm (via Marginal Revolution), finds substantial value in the firm’s ability to create goal-alignment. In his model, consider two parties with two business opportunities (for example, building a product and selling it) deciding how to pursue them. If their two opportunities are substantially interdependent, but their decisions are made independently, then each is in danger of being spoiled by the other. If instead one party takes a controlling role, offering the other party appropriate incentives, then the likelihood of being spoiled drops and it is more likely that both business opportunities will be successful. And further, the optimal incentives in this case look more like salary than like partnership, because the goal is to get the employee to do what they are told, rather than what they think will be most successful.

The take-away for the lean startup is that you must include in your firm the people, skills, and assets from whom you require alignment to a common goal. You can outsource anything where the practitioner can pursue their own profit maximization and not impact your focus. The meta take-away is that the theory of the firm is still open to innovative interpretations. For anyone interested in studying entrepreneurship, it’s important to understand the economics underlying the organizations that are being created.

Apple is a Luxury Brand, Android Will Never Be

Recently I’ve had several good conversations about exactly what business Apple is in. They have clearly transcended their traditional role as a computer maker. Some people think that Apple has become a media company, or a telecommunications company. What they have really done is to become a luxury brand. As a luxury brand they are shielded from feature- and performance-based competition, enjoying higher margins and more stable revenues than other consumer electronics firms. The future of the iPhone and iPad and their strategy for competing with Android will be based on Apple’s luxury brand.

In laptops and desktops, Apple is unassailed as the luxury brand. Whenever I talk to non-engineers about buying laptops or desktops, this is clear. If I suggest they get a Mac, the response is never “I don’t know if it will run my software” or “I prefer Windows”, but rather “I can’t afford one” or “it feels unnecessary.” Rather like if someone asked me what car to buy and I suggested a BMW. As far as I can tell, non-geeks would all buy Macs if money were no object. And there is a strong correlation between people who display what computer they use socially (geeks, coffee-shop denizens) and Mac users (gamers have their own tastes and displays). Thanks to the fact that the OS no longer matters, consumers are free to select either a utilitarian lowest-bidder machine, or a Mac.

Apple doesn’t really have any competition in this market. Sony has tried several times, and makes some really nice (and really expensive) machines. But because the Sony brand still doesn’t mean luxury to the man on the street, it doesn’t give people the opportunity to show off that they require from their luxury goods. And so Apple has a near-monopoly on expensive computers. In June 2009, Apple had 90% market share for non-business computers costing more than $1000. Their consumers are not price sensitive, and so Apple gets correspondingly high margins, creating a lucrative and stable business.

The iPod is also a luxury good, albeit a luxury that nearly everyone can afford. I think it is a fluke that Apple dominates the portable music player market. I think the iPod is the Coach purse of the Apple lineup: It’s a luxury good, but one that nearly anyone can afford and is easy to justify. And with those ubiquitous white earbuds, you can show off your good taste even when the player is in your pocket.

Speaking of the iPod, some people think that Apple is becoming a media company, leveraging their control of the player into domination of the music business. Far from it, I would say. Apple is happily participating in the demise of the music industry, carrying the record labels in a handbasket towards the free or nearly-free distribution of recorded music that is the obvious conclusion of technological improvements. If you told Steve Jobs that all music is going to be free tomorrow, the logical response would be “great, people are going to need new iPods with more storage.”

The iPad is clearly a luxury good. Early adopters proudly show off their iPad. It’s very expensive and has little competition. There is a big question as to how the market for tablets will develop. It may go the way of MP3 players, an expensive but pleasant toy where everyone buys the nice one from Apple. Or it may look more like the modern PC world, where anyone can get a decent tablet from Acer/Dell/HP/etc for $200. A lot depends on how broad the demand is for tablets.

The iPhone initially headed in the direction of the iPod, looking like mainstream consumers were choosing between an iPhone and no smartphone at all. But Android has created a credible option, in fact a wealth of credible options, that more practical consumers see as the better choice when it comes to price, service availability, etc. But Apple still sells plenty of phones to people who want the new iPhone, even if the antenna is broken, the service is terrible, and their preferred carrier doesn’t have it.

Expect Apple to maintain a high price point and air of exclusivity around the iPad and iPhone. In the face of dozens of perfectly adequate Android competitors, Apple may well cede the low end of the market. Their branding, integration, and user experience will allow them to capture a premium price at the high end. Their product line will stay simple; customers aren’t interested in the optimal price/performance or in choosing features. Customers just want the new Apple device, and will not be especially conscious about the price or comparisons to third party products.

Much has been written about developers fleeing iOS for Android. It’s true that Apple has been difficult to do business with. I expect mobile app developers to realize that Apple has the customers they want. Years ago, Apple was able to keep developers on the Mac platform when their market share was in the low single digits, because the average Apple user bought a lot more third party software than the average Windows user. Similarly, by hanging on to the high-end, $4-latte-drinking customer, Apple will be the place to go for developers selling $4 apps. Expect comparisons of per-user app spend to be forthcoming, and the numbers to be in Apple’s favor.

Apple has figured out how to be the only mass market luxury vendor in desktops, laptops, and MP3 players. By applying the same techniques to tablet computers and mobile phones, they might not maintain raw market share, but they can hang on to the most profitable customers, which is more important. They will do it not by offering the best products on some absolute scale understood only by geeks, but by offering a user experience that starts in the store, a brand which is increasingly well recognized, and a set of stories that tell people they are buying something more than just luxury.

Non-Destructive Process Inspection on OSX: Blog Post Recovery

Moments ago I was writing a different blog post, about home renovation. Unfortunately just as I posted it a bug in ecto, the aging client I still use to edit blog posts, caused the complete loss of the text with no backup copies. Because messing about with system tools is more fun than rewriting that blog post, I present instead a brief howto for creating a core dump on OSX without killing a process, and inspecting that core dump to attempt to recover your data.

As many of my readers probably already know, a core dump is an image of the contents of a program’s memory. Generally core dumps are created when a program fails in a particularly catastrophic way, such as a segmentation fault. Core dumps help programmers find out what led to the failure. Usually it’s easy to get a core dump: you just do a kill -11 of the process ID, faking a segfault. This takes down the program and writes a core dump.

Unfortunately, since core dumps aren’t useful to non-programmers, the environment on OSX by default does not make them, even when there are segmentation faults. One can change this for a given shell or process or login session using the ulimit command or the associated syscalls, but Murphy was with me today and so ecto was not running with such a setting. It is possible to change the setting of a running process by connecting with a debugger like gdb and making the right syscall, but that felt a little risky, since if I messed it up the process would be gone, whether I got a core dump or not.

Instead, let’s figure out how to not kill the process at all, doing the work of taking the memory snapshot ourselves. This should be possible with modern process inspection APIs. The book you want for this is Mac OS X Internals. I have a copy, and was all set to begin some deep yak shaving figuring all this out. However, the book saw me coming, and already laid out an example in detail, called Process Photography.

So, if you want to know a lot more about making your own core dump utility, you can read that post. Or if you are still with me because you just want to know how to recover a blog post, then go there and download gcore-1.3.tar.gz at the end of the post. Untar it and compile your gcore utility. Now you can create a core dump by running gcore -c ecto.core PID. If your experience is like mine, this will generate a 1.3 gigabyte core file, because modern programs are not shy about using memory, virtual and otherwise.

Now, this 1.3 gigabyte core file contains everything from program text and mmaped files to active memory to freed memory that hasn’t been reused yet. It’s a vast expanse of stuff you don’t need, most of it binary. Luckily, most programs will just store textual content as ASCII or UTF8. Assuming you were writing English, the strings utility will be sufficient to find your text. So you can run strings ecto.core > ecto.strings. This will generate another large file (64 megs this time, not 1.3 gigs) with just the ASCII string data from your program’s memory. Still a lot to wade through, so I use grep -i to look for uncommon words in my post, and less to be able to page around the file quickly.

I wish that the story had a happy ending, but after all that I found that the ecto memory space contained a dozen copies of my post, but all of them were the truncated version that it had posted on my blog, rather than the actual text I wrote. So you will have to wait until next week (or at least tomorrow) to learn what I had to say about house renovation.

Platonic Browser Session Management

Firefox just crashed, and when I restarted it I was informed that it had some trouble reopening my 115 tabs. Understandable, but I went ahead and clicked the button that encouraged it to try harder. The result, after consuming what would have been several thousand dollars of computer time back when they charged for it, is that I’m back in the mess I made myself, with an unmanageable collection of pages open, covering topics like faucets, washing machines, coffee brewing, JVMs, whatever else came out of Google Reader, and the research I was doing for this post.

Our civilization has had tabbed interfaces since 1988. Mainstream browsers (well, Mozilla) have supported tabbed browsing since 2001. As tabbing has become more mainstream, as the web has gotten more complex, and as computers have gotten fast enough to handle dozens of open web pages, people have opened more and more tabs. The result is that nearly everyone knows the experience of “just having to close some tabs” before you reboot, or so you can get on with more important work. It’s easy to overwhelm yourself with the amount of content you can have open in tabs, and clearing it out is often an archeological experience spanning the last week or month of your web activities.

Browser tab management represents the greatest software usability challenge of our time. We are all facing information overload of one form or another, and this is an opportunity to improve the way people find, consume, retain, and manage information. Lately we are seeing a lot of attempts at innovation here, from Chrome’s tear-away tabs and performance optimization, to Firefox extensions like Ctrl-Tab. Just today we saw Tab Candy, a preview of new functionality in Firefox 4.

I’ve often said that I’ll switch to the first browser that doesn’t make me feel guilty for having 100+ open tabs. Looking at Tab Candy and other innovations, I see that we are moving in the right direction. But there are still many aspects of the problem that aren’t being sufficiently addressed.

  • Application-oriented tab organization – Many of the sites I use today are really applications, whether it is Google Docs, Facebook, or Amazon. Taking application behavior into account, and helping me to avoid opening the same heavy application (e.g. gmail) multiple times is part of getting tab management correct.
  • Automatic tab organization – Systems like Tab Candy require that users manage tabs. But wouldn’t it be better if tabs were managed and grouped automatically?
  • Managing the tab/bookmark continuum – Leaving something open in a tab, or opening it in a tab, is often a way of deciding to come back to it later. Bookmarks accomplish the same thing, but most people’s bookmarks are themselves a usability nightmare. Automatically migrating tabs into bookmarks and bookmarks back into tabs might be the solution to a lot of these tab problems.
  • Excursion and history management – Web browsing isn’t a linear process, the way browser history would have you believe. It is at least a branching tree, which tabs support. But often the process of web browsing is a product in itself, as when researching a new subject or deciding on a purchase. Being able not only to manage the process as it happens, but also to archive it for later resumption or reference, would improve the efficiency of many browsing tasks.
  • CPU efficiency – An implementation detail to be sure, but one of the obstacles to running with 100+ tabs is the CPU load of all the little bits of JavaScript. Some way to manage this, such as pausing or unloading pages that are not visible, is likely to be required.
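
The CPU-efficiency idea above can be sketched in code: a browser could run timers only for visible tabs and defer the rest until the tab is shown again. This is a purely illustrative sketch, not a real browser API; `TabScheduler`, `tick`, and `resume` are hypothetical names I’ve made up for the sake of the example.

```typescript
// Hypothetical sketch: throttle background tabs by running timer callbacks
// only for visible tabs, deferring the rest until the tab is shown again.
type TabId = string;

class TabScheduler {
  private visible = new Set<TabId>();
  private deferred = new Map<TabId, number>(); // ticks skipped while hidden

  setVisible(tab: TabId, isVisible: boolean): void {
    if (isVisible) this.visible.add(tab);
    else this.visible.delete(tab);
  }

  // A timer fires for `tab`: run it now only if the tab is visible,
  // otherwise count it as deferred work and report that it was paused.
  tick(tab: TabId): boolean {
    if (this.visible.has(tab)) return true; // run the script
    this.deferred.set(tab, (this.deferred.get(tab) ?? 0) + 1);
    return false; // paused
  }

  // When a tab becomes visible again, collapse its deferred ticks into
  // one catch-up batch rather than replaying every missed timer.
  resume(tab: TabId): number {
    const missed = this.deferred.get(tab) ?? 0;
    this.deferred.delete(tab);
    this.visible.add(tab);
    return missed;
  }
}
```

Coalescing missed ticks on resume, rather than replaying them all, is what keeps a 100-tab session cheap: hidden pages cost a counter increment instead of a script execution.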

As you can see, there is still plenty of room for improvement in the browser interface, particularly around large numbers of tabs. What other cutting edge stuff have you seen? What would you like to see implemented?

DEBS 2010 Highlights

I spent the first half of last week at DEBS 2010 at King’s College in Cambridge, UK. It was a great conference, with many good papers and interesting attendees. As usual, some of the best ideas came from the hallway sessions. But I’d like to provide some pointers to my favorite papers of the conference:

  • David Jeffery’s keynote on Betfair’s event-driven architecture: past, present, and future. David presented not only the performance challenges of running the core betting exchange, but also the soft benefits. Thanks to their event-driven architecture, Betfair is able to do more experimentation with less disruption, and be more agile as an organization.
  • Dan O’Keeffe presented Reliable Complex Event Detection for Pervasive Computing, a system for compensating for missing data. Most importantly, it lays out a selection of strategies that developers can choose from, letting the system automatically pick the right approach based on whether detection should be optimistic, pessimistic, or something in between.
  • Quilt: A Patchwork of Multicast Regions was presented by one of the local students on short notice, because the author was not able to attend due to visa problems. This is especially frustrating when the paper is so interesting and the analysis quite strong. Quilt is a system for combining multiple delivery mechanisms to achieve efficient wide area distribution. Combining overlay-network based protocols with “patches” of true multicast where available, Quilt optimizes message routing for efficiency, reliability, and latency.
  • An Approach for Iterative Event Pattern Recommendation, which was also presented by a colleague rather than one of the authors due to visa issues. The paper describes a system for recommending event patterns to domain experts based on their initial attempts to define the pattern. In a controlled user study, the recommendation system substantially reduced the time users took to define novel patterns. It was good to see a real user study of a programming efficiency system, though there is plenty of room for further measurement.
  • Experiences with Codifying Event Processing Function Patterns, presented by the author Anand Ranganathan, dealt with a different kind of pattern. In this approach, developers build template network flows, annotated with tags that define which components can be used in the flow. Any component that matches the inputs, outputs, and tags can be inserted into the template, and all possible template instantiations represent a domain of valid applications. End users can search this combinatorially large domain with a flexible interface to find and instantiate applications that meet their needs.
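
The tag-matching idea in that last paper can be sketched loosely as a structural check: a component fits a template slot if it handles the required message types and carries the required tags. This is my own illustrative reconstruction, not the authors’ actual model or API; `Component`, `Slot`, and `candidates` are names I’ve invented, and the real system is certainly richer.

```typescript
// Illustrative sketch of tag-based component matching for template flows.
interface Component {
  name: string;
  inputs: string[];   // message types consumed
  outputs: string[];  // message types produced
  tags: string[];     // semantic annotations, e.g. "aggregation"
}

interface Slot {
  inputs: string[];
  outputs: string[];
  requiredTags: string[];
}

// A component fits a slot if it consumes and produces the slot's message
// types and carries every tag the slot requires.
function matches(c: Component, s: Slot): boolean {
  const subset = (need: string[], have: string[]) =>
    need.every(x => have.includes(x));
  return subset(s.inputs, c.inputs) &&
         subset(s.outputs, c.outputs) &&
         subset(s.requiredTags, c.tags);
}

// Enumerate every valid instantiation of a single-slot template; a full
// multi-slot template would take the cross product of such candidate sets,
// which is where the combinatorially large application domain comes from.
function candidates(components: Component[], slot: Slot): string[] {
  return components.filter(c => matches(c, slot)).map(c => c.name);
}
```

Even this toy version shows why end users need a search interface rather than a flat list: with a handful of slots and a few dozen components per slot, the space of valid instantiations explodes quickly.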

There are many great papers and talks not listed here. Obviously my own bias shows, as does the fact that I didn’t attend every talk or read every paper.

Unfortunately most of the links go to the ACM portal, which doesn’t offer public access. My frustration with and thoughts on the future of academic publishing will have to wait for a future post. I recommend googling the titles and finding them on author pages if you can. Or ping me and I can send you papers.

Next year the conference will be hosted in New York by IBM research. Whether you attended or not, if you have any feedback on how to make the conference more appealing, I’m happy to hear it.