Latest Publications

Passive Personal Networking: How to Let Others Network for You

Talking to some friends recently, I’ve realized that many people don’t think of themselves as networkers. They are reluctant to get started playing that “game”. Even when they are highly capable engineers looking for a new position at a startup, they don’t want to “ask their friends to find them a job.”

Everyone knows that networking is the best way to get a job. Not everyone is prepared to attend “networking events”, hand out business cards to people they meet, or spend all their time maintaining professional relationships. If you are one of these people, I have good news. You can still benefit from networking to find jobs and other opportunities. Merely by being open to the possibility of networking, rather than hostile to it, you can benefit from the networks of people you already know: your friends and coworkers.

The reason is simple: networkers need you. People in the world who network, who spend lots of time maintaining relationships, are participating in a gift economy. They trade favors, introductions, contacts, and other information. Most networkers are looking for win-win situations, connections they can make which benefit both parties. When they get you a job, they are also helping someone fill a position.

These networkers need raw material, which comes from people like you who are not otherwise plugged into the network. By being open to networking, you let them help you. Here are three simple principles to be open to networking:

  • Be open. When you meet someone socially, and they are interested in you, tell them about yourself. If you are looking for a new job, or new opportunities, or about to finish a program at school, or an expert in some part of your field, feel free to volunteer that fact. This isn’t begging; it is giving people the opportunity to help you out.
  • Be specific. The more specific you are when telling people about your interests, the easier it is for them to help. No one wants to flood their network with a request for “a job”, but if you are “an experienced robotics engineer looking for a clean tech startup,” that message is valuable and easy to route to the right place.
  • Be appreciative. People who network do it because they like helping people. Thank them, but do not immediately respond with a gift or other token. Networking is a long-term game, and there will likely be opportunities to reciprocate in the future. Or you can pay it forward, by doing your own part to help others in a similar way.

Networking doesn’t have to mean pushy conversations with strangers. By maintaining your existing social relationships, and being open, specific, and appreciative, you can let other people do the networking for you. Good luck.

Analysis of May 6th: The Importance of Near Misses

Since writing about stock market crashes and normal accidents, I have spent even more time talking about the events of May 6th. Good analysis is starting to come out. The best I have seen so far is Nanex’s Flash Crash Analysis. Their conclusion is that the crash was precipitated primarily by a queuing and timestamping bug at NYSE, which led to understandable but unfortunate flash mobbing by high frequency trading firms attempting to execute against the delayed prices being quoted out. I recommend their analysis and the supporting charts, which are quite compelling.

Nanex makes a few concrete suggestions: NYSE should fix their timestamping logic, HFT firms should be discouraged from what they call “quote stuffing”, and quotes should have a minimum time to live of 50 milliseconds. The last suggestion is the most interesting, and would have a significant impact on market dynamics, but possibly not a significant impact on prices or the availability of liquidity. Many other markets (Brazilian equities, US futures) have some kind of charge for canceling or modifying orders. This kind of restriction or discrimination doesn’t prevent high frequency traders from operating, but it does change their strategies, and it might help to discourage runaway markets.
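
To make the minimum time-to-live idea concrete, here is a toy sketch (entirely invented here, not anything Nanex or a real exchange has specified as code): a trivial in-memory quote book that simply refuses to cancel a quote until it has rested for 50 milliseconds. A real matching engine is vastly more complicated, and a real venue might fee or queue early cancels rather than rejecting them outright.

```python
import time

MIN_TTL_SECONDS = 0.050  # the 50 millisecond minimum quote lifetime

class ToyQuoteBook:
    """A toy, single-threaded quote book used only to illustrate the rule."""

    def __init__(self):
        self._quotes = {}  # quote_id -> (price, size, time placed)

    def place(self, quote_id, price, size):
        self._quotes[quote_id] = (price, size, time.monotonic())

    def cancel(self, quote_id):
        price, size, placed = self._quotes[quote_id]
        if time.monotonic() - placed < MIN_TTL_SECONDS:
            # Under the proposed rule a too-young quote cannot be pulled yet.
            return False
        del self._quotes[quote_id]
        return True

book = ToyQuoteBook()
book.place("q1", price=100.25, size=500)
print(book.cancel("q1"))      # False: the quote is younger than 50 ms
time.sleep(MIN_TTL_SECONDS)
print(book.cancel("q1"))      # True: the quote has rested long enough
```

Even this trivial version changes the economics of quote stuffing: a quote you cannot immediately pull is a quote someone might actually trade against.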

What I have found myself recommending, rather than these fixes to the immediate problem, is cultural change: identifying potential future system accidents and resolving them before they make headlines. In part 2 of the Nanex report, they identify two previous days on which a similar delay in NYSE quotes occurred without triggering a run on the market. With 20/20 hindsight, one might guess that investigations of these previous events would have been sufficient to prevent or limit the damage on May 6th.

Unfortunately, the culture of American capital markets is hyper-competitive, and that’s something our government has encouraged through deregulation. Market operators are in competition with one another, brokers are in competition, trading firms (HFT and otherwise) are in competition. A culture of secrecy, and of exploiting weaknesses, makes investigating anomalous market events and other bugs very difficult. Technological transparency is important for the health of the whole financial system, but firms aren’t yet ready for it.

Short of distasteful regulatory enforcement, it’s hard to see how we would get people to participate in any kind of information sharing. But if we want to avoid future market failures like this one, we’ll need to find a way. There are always going to be bugs in the software we use to operate our financial system. The question is whether we find them among friends, or if we wait for the bugs to make headlines.

Normal Accidents and Stock Market Crashes

In the weeks since the precipitous and brief stock market crash on May 6th, I have found myself answering questions about it from people outside the capital markets and discussing it with insiders on many occasions. While I have some thoughts about what went on, I’m often unable to satisfy people’s desire to blame a single precipitating cause. I think what is going on is that too few people understand the nature of complex systems and what is called a “normal accident.” Given the sophistication of the markets, the number of safety checks and balances, and the complexity of the implementations, it is not surprising that events like those of May 6th happen, nor should people think it is possible to eliminate them entirely.

Normal Accidents (or, as Wikipedia calls them, “System Accidents”) are major failures caused by unintended and unexpected interactions of many small failures. The term was coined by Yale professor Charles Perrow in his book Normal Accidents: Living with High-Risk Technologies. Complex systems fail for complex reasons. In systems engineered for safety and redundancy, the failures that do happen require many contributing factors. Perrow’s focus was on large industrial systems, such as power plants, chemical plants, aircraft, shipping, and military operations. Time and again we see complex failures in places like Three Mile Island, the Challenger shuttle, or BP’s current oil spill.

In a normal accident, the contributing factors come from many areas and often many organizations. Errors result from poor regulation, lack of training, operator error, specification errors, mechanical failures, lax maintenance, poor morale, organizational structures, economic incentives, and many other areas. Because systems are tightly coupled, many of these factors are able to mutually reinforce one another and lead to systemic failure. The resulting cascade of failures can look like a Rube Goldberg machine in its complexity.

In these tightly coupled systems, potential normal accidents are happening all the time. Systems are too complex to be entirely without failures. However, in the common case these partial failures are caught and resolved quietly. In fact, these near misses are an opportunity to understand the unintended failure modes of the system. Rather than build once and deploy, safety must be a continuous process of improvement and understanding. Systems aren’t stable and they are not deployed in a vacuum. As they evolve, failures and near misses must be examined and used to drive improvements.

Software, especially modern networked software, dramatically increases the incidence of normal accidents. As anyone who has ever created, deployed, and debugged software knows, it is common for individual software bugs to have all the characteristics of a normal accident all by themselves. Add together software written by multiple different organizations connecting over a network and it’s a wonder anything works at all.

Getting back to the events of May 6th, the “Flash Crash”, they are best explained as a system accident. People have tried to blame one cause or another, from a fat-fingered trader or a faulty brokerage system, to investor agitation over Greek debt and high frequency trading firms going wild, to the NYSE hybrid market system, bugs in other members of the national market system, and outdated circuit breaker regulations. Without going into detail about all these potential causes, I’d like to suggest that the most likely explanation is that all of these causes, together, are what created the exceptional failure of market prices, broken trades, and finger pointing. No one cause is really more precipitating than any other, and apportioning blame is much less important than understanding in detail what happened.

It is impossible to eliminate normal accidents as we increase the complexity of our systems. The best we can do is to learn from accidents, and from near misses, to introduce the kind of slack in our systems that will protect us from the worst accidents. Learning requires transparency. But in systems which cross organizational and regulatory boundaries, with billions of dollars and reputations at stake, transparency is going to be a challenge.

PS: If you’re interested in learning more, I suggest this NASA PowerPoint on normal accidents.

A Timeless Way of Building or Why do all these houses suck?

Lately I’ve been looking at a lot of houses. I’ve also been reading A Timeless Way of Building (ATWB). The net result has been a deep dissatisfaction with the available housing stock in Arlington, and probably in the entire United States. So while I would like to recommend the book, it comes with the disclaimer of being hostile to casual house hunting. Instead it will help you develop opinions about everything.

I started out reading ATWB because of the Computer Science implications. Design patterns, a popular notion in software development, are based on the notion of a pattern language developed in ATWB by Christopher Alexander. Alexander’s book is about the architecture of buildings and towns, rather than of computer programs. But I wanted to get back to the source to understand where design patterns came from. More on that later.

As a book about architecture, being read by a layman (me), A Timeless Way of Building is fabulous. It lays out the general notion of patterns, and helps you begin to understand why some buildings work while others do not. Beyond design principles for good buildings, ATWB lays out the societal drivers for bad buildings in our culture. Good buildings are defined by patterns, patterns that work together to form a pattern language. Bad buildings generally result from failing to understand the pattern they are trying to follow, or from not having a pattern at all.

A pattern is a way of building something, a functional unit of a building. For example, a parlor at the front of the house, or a front porch, or a farmhouse kitchen. Some patterns may nest inside other patterns; for example, an eating alcove might be part of a farmhouse kitchen. A pattern language combines a set of interdependent and self-sustaining patterns to form an ecosystem of buildings that work well together. For example, a pattern language might describe everything from the town square to the livestock pens of a rural French farming village.

Alexander’s patterns are based around human behavior. The pattern only comes about because of how the users interact with the building, and often how the culture of the users constructs that behavior. A front porch isn’t really a front porch until you sit on it in the evening, and neighbors out for a stroll stop and say hello. A farmhouse kitchen isn’t just a big room with lots of work surface, plumbing, and appliances; the work that goes on there defines the pattern of the space, and makes it fit into the pattern language around it.

In Alexander’s mind, modern construction and architecture are blind to pattern languages because we have separated the concerns of the users from those of the builders.

Traditionally when farmers built a cow barn (or when the Amish build a cow barn today) the people building the barn were experts in its use. They were cow farmers themselves, from the local community. The barn was built, with only small variations, in the same way all the other cow barns were built, because that worked. And if some variation in the construction process interfered with cow farming, the builders would be capable of identifying it and correcting it.

Today, when computer scientists decide to build a research center, the job is put out to bid by university committees composed of administrators and researchers. An architect is selected, a building is designed, and building firms are contracted. Throughout the process new ideas are developed and handed down the chain to be implemented. But at the end of the day, the workers pouring the concrete know nothing about the work that will be done in the new building. And neither do the foremen, the draftsmen, or anyone else. The architect might know a little, but is unlikely to have ever gotten hands-on with the work. And the computer scientists, who understand their work, don’t feel they have standing to participate in the building process.

The result can be a woefully inadequate building. The two cases I’m familiar with are the MIT Media Lab and the Stata Center. Both are cutting edge buildings, and very nice in some ways. But both also have many problems which have been chronicled by their residents as well as independent authorities. The Media Lab features prominently as a failure in How Buildings Learn. The Stata Center is the cover story of Architecture of the Absurd. The latter in particular blames architects and the procurement process. But from the perspective of pattern languages the problems are more systemic.

Getting back to single family houses, they do not suffer from the university procurement problem. But they do suffer from a lack of understanding by builders of how the buildings will be used.

One might expect that any individual would know how a single family home is to be used. This might be true, but what we all lack is a shared pattern language that helps our homes and our neighborhoods work together with our lifestyles and our culture. And that’s a tall order. Our lifestyles and culture are changing dramatically from one generation to the next, faster than we replace our housing stock. It’s not surprising that a house built for young families in 1950 doesn’t match a young family in 2010. Or a three-decker built for middle class professionals in 1910 doesn’t precisely fit the needs of nine grad students in 2010.

But even if we look at new construction it is hard to identify clear patterns. There are some features people like, such as granite counters, big closets, multi-car garages, and open floorplans. But these don’t come together to form a pattern language. They don’t say how large the family will be, how it will take meals, or how social entertaining will be organized. New houses built on spec are designed to sell, with curb appeal and attractive luxury features prioritized over usability. People buy what they’ve seen on TV, even though most of the houses they see on TV only have three walls.

Reading a book on capes, I learned that the cape house was designed to be built over time as your family grew. You’d start with a fireplace and two rooms, the door on one side. When you had children, you would build the other side of the house. Capes were easy to extend, adding breezeways, outbuildings, and workshops, or adding dormers upstairs to sleep on the second floor. The evolution of the house mirrored the life stages of the family.

In 2010, few families if any build their house in stages this way. They buy complete houses, and either move or have custom renovations done by professionals when the house no longer meets their needs. And families don’t follow a single path through the stages of life. Some families live multigenerationally, others do not. Some families entertain, or have formal meals, or cook, or don’t cook. Families have 0, 1, 2, or more children. All these choices and more mean that your neighbors probably do not live like you.

And that, more than the discipline of modern architecture, is why we have lost our pattern language for single family homes. Pattern languages evolve slowly, as new structures are built. But houses last 60 years or more. 60 years ago there was no birth control, no pizza delivery, no internet, no supermarkets, and no thermal glass. Most families had one car and milk was delivered daily.

Our culture is changing too fast for any evolved pattern language to keep up. We’re stuck with buildings created through intelligent design. Unfortunately, intelligent design isn’t very good.

Device Convergence, or how I learned to stop worrying and love the Kindle

A number of months ago I posted my disappointment in the version 1 Kindle. I’ve also tried out the version 2, and continue to be convinced that the Sony Reader is a better piece of hardware for dedicated book reading.

But (if there wasn’t a but there wouldn’t be much of a post here) the Sony eBook store is painfully terrible. Titles are expensive, hard to search for, and often not available. The result is that my book buying on the Sony gradually trailed off. On my last two business trips I haven’t bothered to bring the Reader, and it sits on a shelf right now, probably running out of battery.

Last week Aletta wanted a book which was available from Amazon, and didn’t want to wait for it. She downloaded the iPhone Kindle app, bought the book, and was pleasantly surprised. Reading on a small screen is more pleasant than either of us expected, and the Kindle app is quite well designed.

I carry an iPod Touch with me basically everywhere. Switching to Kindle means I can have my books with me even when I don’t want to carry a dedicated eink-reader. I have the option of buying a dedicated reader if I want one.

The only downside is that book ownership is still restricted to a single account, from what I can tell. Aletta and I solve this by keeping all our ebooks on a single account, and that’s no worse than Sony. But there is definitely room for improvement in managing household book collections and book sharing. Hopefully between Sony, Amazon, Apple, and Google we have enough competition to find a good set of structures.

In summary, on Kindle:

  • Books are easy to buy
  • Book availability is superior
  • Books are available across multiple devices
  • Books are available on devices I already own
  • Book access feels a little more future proof against DRM fail. Or at least if there is fail, there will be a critical mass of people building cracking tools.

I hope Amazon gets an Android port out soon, and starts encouraging other companies to make eink-readers that support Kindle. It could be a great ecosystem.

Sony, it’s time to realize that user experience is about a lot more than just the industrial design of the physical product.

End of the World Insurance: the Financial Halting Problem

In computer science, the halting problem is very well known. It says that it is impossible to build a program that can analyze arbitrary other programs and determine whether they will eventually terminate, or halt. This is a useful problem to understand, because many software problems that look possible at first can be reduced to the halting problem and thus demonstrated to be impossible. It’s common to hear someone say “actually, that seems like a halting problem” when discussing compiler optimization, program analysis, and related problems in computer science. This is much as a physicist might say “but that’s perpetual motion.”
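
For readers who haven’t seen it, here is a minimal sketch of the standard diagonalization argument, with hypothetical function names invented for illustration; it assumes, for the sake of contradiction, that a general-purpose halts() analyzer could exist.

```python
def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts.
    The halting problem says no such general analyzer can be written."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    return               # oracle said "loops forever", so halt immediately

# Now consider paradox(paradox).
# If halts(paradox, paradox) returned True, paradox would loop forever.
# If it returned False, paradox would halt immediately.
# Either answer contradicts the oracle, so halts() cannot exist.
```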

In the sphere of financial derivatives, our civilization has recently come to understand that there is a whole class of financial products which look attractive, and perform reasonably well some of the time, but which eventually fail. The most obvious of these are the credit default swaps of the latest crisis. But other examples include portfolio insurance, made famous in the 1987 market crash. The problem with these products is that they are designed to protect the buyer against losses in all circumstances, even when the market is behaving badly. But when the market is behaving badly, it can behave very badly. These products reduce to end of the world insurance. When the world is ending, who is left to pay out the insurance?

I think there is a useful parallel to the halting problem. If your new financial product can be used as end of the world insurance, it probably will be. And since end of the world insurance is fundamentally flawed, it should raise questions about what your product is really accomplishing.

Quick House Update

My last post ended with me losing the house due to being outbid. In a strange turn of events we may have won the bidding war without ever submitting the highest offer. I’ll try to provide more details at a future date, if it turns out they are interesting. At the moment we are still in negotiation with the seller.

A Computer Scientist Bids on a House

I was going to favor you all with a post about Java’s System.nanoTime. That post will have to wait until tomorrow. Instead, I spent the day (arguably the weekend, since 3:15pm on Friday) putting in a bid on a house. I won’t bore you with the details of the property, inspections, financing, etc. However, I think the details of the bidding process are quite interesting.

To quote a friend of mine, when asked how much we should bid on the house:

In a first-price auction, there’s no dominant strategy. :( However, by the Law of Revenue Equivalence, the seller will, on average, get the second-highest valuation of all the bidders. So, if you think you value the house more than everyone else, all you have to do is guess what the next-best buyer would be willing to pay for it, and bid slightly more than that to win.

Yes, well, that is entirely correct, but unfortunately unhelpful. We are dealing with a non-repeating negotiation (so “on average” doesn’t apply). And it’s remarkably unclear that the auction will be run according to any rules at all. Predicting the behavior of the seller and the seller’s agent is quite challenging. On Saturday at brunch a different friend recommended The Strategy of Conflict by Thomas Schelling, from the era of game theory research that brought us such gems as “mutually assured destruction.” Unfortunately there wasn’t time to read it.

Obviously (obvious to anyone who hangs around certain kinds of mathematicians) the right thing is to have a Vickrey auction, where the top bidder pays the price of the second-highest bid. But this is far from obvious to Realtors. They treat making an offer as a very expensive operation. And counteroffers from the seller seem to be very uncommon. There is a lot of concern about “offending” or missing out on an offer. I’m not sure why. If someone wants to buy your house on Monday for $x, they probably still want to buy it on Wednesday. Unless maybe their Realtor is reading something into the delay.
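
Out of curiosity, here is a toy sealed-bid simulation under textbook assumptions that certainly do not hold for a real house sale: independent private valuations drawn uniformly, risk-neutral bidders, and the standard symmetric shading rule for the first-price case. It illustrates my friend’s revenue-equivalence point rather than the actual negotiation; with these parameters both averages come out around $420,000.

```python
import random

def simulate(num_bidders=5, trials=20000, low=380_000, high=440_000):
    first_price_total = 0.0
    vickrey_total = 0.0
    for _ in range(trials):
        valuations = [random.uniform(low, high) for _ in range(num_bidders)]
        # First-price sealed bid: with uniform valuations, the textbook
        # symmetric equilibrium is to shade your bid: b(v) = v - (v - low) / n.
        bids = [v - (v - low) / num_bidders for v in valuations]
        first_price_total += max(bids)
        # Vickrey (second-price): bid your true valuation; the winner
        # pays the second-highest bid.
        vickrey_total += sorted(valuations, reverse=True)[1]
    print(f"avg first-price revenue: ${first_price_total / trials:,.0f}")
    print(f"avg Vickrey revenue:     ${vickrey_total / trials:,.0f}")

simulate()
```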

One is left wondering if the world would converge on a better auction structure without real estate agents muddying the waters, or if, as eBay has demonstrated, people would rather keep working in an easy-to-understand system than an optimal one. Unlike eBay, the current system is quite complicated, with many conventions and few rules. In the age of Redfin, Realtors have an obvious interest in preserving the status quo, since understanding the rules is the only thing they have left. In the past, they at least had priority access to listings and historical sale data. Now all they have left is understanding the negotiating process.

In the end we offered $410k for the house. There were 5 offers, three of them at a similar price point to ours, $10k over asking. There was one offer at $430k, so the sellers decided to accept that offer, rather than encouraging further bidding. Case closed, transaction completed, Realtor happy. Money might have been left on the table, but getting it would have required both work and risk on the part of the agent.

Synthetic Biology and Big Ball of Mud

Researchers at the MIT OpenWetWare project are attempting to engineer Synthetic Biology, creating reusable and composable biological components that can be combined to create useful organisms. In the process, they are discovering that biological systems don’t follow the same patterns of good architecture familiar to us from software.

In software engineering, architecture is perceived as critical to the success of a large implementation project, and also to the ongoing maintenance of the code created. Since software is very expensive, there is a lot of focus on architecture, choosing the right architecture, and making sure that the architecture is faithfully executed. Some of the biggest changes in software in the last 30 years, like object oriented programming and service oriented architecture, have come about because of a need for clearer architecture in ever larger collections of software that run modern life.

Architecture analysis also extends to describing anti-patterns, those things which do not work and are to be avoided. Probably the most famous anti-pattern is the Big Ball of Mud. A big ball of mud is what you get when software has evolved for many years without any clear architecture, where all abstraction barriers and divisions of responsibility have eroded in the interest of expediency and incremental functionality. The interesting thing about big balls of mud is that as a rule they work, and they are remarkably common. It can be very difficult to maintain them, almost maddening. But as long as they are economically valuable enough to justify their care and feeding, they tend to persist.

What people fail to realize about big balls of mud is that they represent a certain kind of efficiency. Maybe not efficiency from a top-down, best-way-to-solve-the-problem kind of approach. But every change is efficient in its own way. Engineers, either fearful of breaking the system or just lazy, make the smallest possible modifications, or the ones they are best able to understand. Abstractions are violated, variables and identifiers are reused, values are overloaded. All of these changes represent efficiency as much as they represent “bad” software engineering.
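
As a contrived illustration (invented here, not taken from any real codebase), a fragment in that spirit might look like this: a magic value overloaded to mean both “not found” and “failure”, a parameter that moonlights as a counter, and a return shape that changed because that was the smallest possible modification at the time.

```python
def lookup(table, key, flag=0):
    """Find key in a list of (key, value) pairs... and sometimes count things."""
    result = -1                  # -1 is overloaded: "not found" and also "error"
    for k, v in table:
        if k == key:
            result = v
            break
    if flag:                     # 'flag' is reused as a counter below
        flag = 0
        for k, _ in table:
            if k.startswith("tmp_"):
                flag += 1        # smallest possible change: piggyback a count
        result = (result, flag)  # the return shape now depends on the caller
    return result

print(lookup([("a", 1), ("tmp_b", 2)], "a"))          # 1
print(lookup([("a", 1), ("tmp_b", 2)], "z", flag=1))  # (-1, 1)
```

Each shortcut was locally efficient; together they make the function impossible to reason about without reading every caller.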

Biological systems follow the same pattern. Because of the pressures of evolution, they are filled with abstraction violations, repurposed functionality, and tight coupling. The mechanics of genetics and evolution drive change, and generally result in the smallest possible sufficient change. Tight coupling leads to efficiency and reuse. Like a big ball of mud in software, the system is difficult to understand by decomposing it into parts. Like these software systems, complex biological systems are filled with shortcuts that work most of the time, implementation details that blossom into major behaviors.

Read Montague covers this in his recent book Your Brain is (Almost) Perfect: How We Make Decisions. The original title was the clever Why Choose This Book (bringing to mind Abbie Hoffman’s Steal This Book). While most of the narrative focused on decision making in the face of limited information, the overriding principle that Montague argues from is that biological systems favor efficiency above all else. He argues that to make more efficient computers, they will need to emulate both the frailties and the strengths of biological systems.

As a student of programming languages and programming methodology, I’m intrigued by the potential for developing software systems that are highly efficient due to their high degree of coupling. The closest thing currently available is whole program analysis like the Haskell Jhc compiler or the GCC Link Time Optimizer. However, while both of these systems consider the whole program for optimization, neither is likely to produce a highly coupled or minimal program. For that the state of the art is the superoptimizer, either the original superoptimizer from 1987 or more recent attempts like TOAST. In a superoptimizer, the entire space of potential programs is searched to find the shortest implementation of a given function. This can be very helpful in core tight loops, but exhaustive search is only helpful for very simple functions. None of these compare to the results of millions of years of evolution.
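
As a toy illustration of the idea (invented here, and nothing like a real superoptimizer’s machine-level search or formal equivalence checking), an exhaustive search over a tiny made-up instruction set might look like this:

```python
from itertools import product

# A tiny single-value "instruction set": each op maps an int to an int.
OPS = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def superoptimize(target, test_inputs, max_len=4):
    """Return the shortest op sequence matching target on all test inputs."""
    for length in range(max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in test_inputs):
                return program
    return None

# Shortest sequence computing 2x + 2 on the test inputs: ('inc', 'double').
print(superoptimize(lambda x: 2 * x + 2, test_inputs=range(-5, 6)))
```

Even at this scale the search space grows exponentially in program length, which is why exhaustive search only pays off for very short, very hot code sequences.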

But what if we had something comparable to millions of years of evolution, but for software? For example, if we had nigh-infinite computational power available inexpensively by using cloud servers in off hours, how could we take advantage of it? To me, the interesting question is not how to build an evolution machine, but rather how to constrain it so that the result is a program you find useful.

In biology and bioengineering, the approach is called forced evolution. There are a variety of techniques, but the one I’m familiar with works for cases where you want to produce chemical X from source chemical Y. First, a bacterium is engineered, by any means necessary, with a biological pathway that lets it survive by producing X from Y, however inefficiently. Next, all other ways for the bacterium to survive are knocked out, by disabling the genes that make those pathways work. Finally, the organism is left to reproduce and evolve in a Y-rich environment. Hopefully, the organisms that best optimize the Y->X pathway are the ones that reproduce more and come to dominate in this environment. The result should be a bacterium much more efficient than could have been designed from scratch, at least using current bioengineering.
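
Here is a minimal sketch of what the software analogue might look like, reusing the toy instruction set from the superoptimizer sketch above so the contrast with exhaustive search is direct. Survival into the next generation is gated entirely on a test suite, which plays the role of knocking out every other pathway. This is an invented illustration, not a proposal for a real evolution platform.

```python
import random

# Same toy instruction set as the superoptimizer sketch above.
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1,
       "double": lambda x: x * 2, "negate": lambda x: -x}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program, target, tests):
    # Tests passed, minus a mild length penalty to discourage bloat.
    return sum(run(program, x) == target(x) for x in tests) - 0.01 * len(program)

def mutate(program):
    p, roll = list(program), random.random()
    if roll < 0.4 and p:
        p[random.randrange(len(p))] = random.choice(list(OPS))            # point mutation
    elif roll < 0.7:
        p.insert(random.randrange(len(p) + 1), random.choice(list(OPS)))  # insertion
    elif p:
        del p[random.randrange(len(p))]                                   # deletion
    return p

def evolve(target, tests, pop_size=200, generations=100):
    population = [[random.choice(list(OPS))] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda prog: fitness(prog, target, tests), reverse=True)
        survivors = population[:pop_size // 5]        # the culling step
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda prog: fitness(prog, target, tests))

# Evolve a program for 2x + 2; it usually converges on ['inc', 'double'].
print(evolve(lambda x: 2 * x + 2, tests=range(-5, 6)))
```

Even in this toy, the interesting work is in the fitness function and the culling rule rather than the mutation machinery, which is exactly the constraint question raised above.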

Can we use forced evolution to build better software? I think this breaks down into two parts. First, can we make this work? The place where evolution of simple bacteria has the advantage is in parallel computation. Each instance of the bacteria not only has its own variant of the software but also implements its own instance of the hardware to run it. Right now computational resources, even for simple compute jobs, are still many orders of magnitude more expensive than biological resources. That said, they are falling fast. One minute of compute on EC2 costs fourteen-hundredths of a cent; the spot price for low-priority computation is a third of that; and the price of a fixed amount of computation is falling steadily. This kind of workload is quite happy to take advantage of modern parallel processors. Forced evolution of computational processes might be economical before we know it.

The second question is, assuming all that works, would we want the software it produced? Today, scientists are spending vast resources trying to understand the wetware developed by natural evolution. The worst software systems are quite simple in comparison. I would expect software created by evolution to be quite opaque. More worryingly, it might be sensitive to various characteristics of the machines or environment in which it evolved. The software would likely contain dead code, unexercised race conditions, unprotected parallel data access, and many other artifacts of unrestrained expedient modifications. A simple unit test suite, even with perfect coverage, would not be able to catch all these issues. We would need a stricter environment, a set of limitations on the solution space or tests for the result that would cull badly behaved implementations, rather than allowing them to take over.

If we could figure out how to control the correctness and environmental sensitivity of evolved software, then we could also control it for designed software. Given that most designed software rapidly trends towards a big ball of mud, the most immediate benefit of any work in this area might not be controlling pseudo-biological processes, but controlling the human-driven processes, keeping them from getting out of hand.

Acela WiFi: Finally Here, Could Be Better

Riding the Acela to New York yesterday, I had my first opportunity to try out the new WiFi service. I’m glad to see Acela moving into the 21st century and joining the ranks of BoltBus and LimoLiner by offering WiFi on trips to New York. It’s been a long time coming, and now that it is here I was looking forward to having good bandwidth through the swamps of Connecticut, rather than depending on my fickle EVDO card.

Unfortunately the experience was less than wonderful. Latency was about double that of EVDO, averaging 450 milliseconds with spikes to 2-3 seconds. That meant that ssh connections were difficult to type across, and web browsing had a distinctly 20th century feel to it. I wasn’t able to analyze throughput because I couldn’t get most of my web-based tools to load. Suffice it to say, throughput was disappointing.

I did some tracerouting and analysis, and I think a lot of the problems are in the first hop. First hop latency averaged 200 milliseconds. I’m pretty sure that first hop was on the train, which means that the local 802.11 network is overloaded. I wonder what level of provisioning they planned for. Looking around on Acela it seems like well over 50% of passengers are using laptops, which is a lot of laptops together in a thin metal tube.
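
For what it’s worth, the measurement itself is easy to reproduce. Here is a rough sketch assuming a Unix-like laptop with the standard ping utility on the PATH; the first-hop address below is a placeholder, so substitute whatever your own traceroute reports for hop 1.

```python
import re
import statistics
import subprocess

def ping_latencies(host, count=20):
    """Ping a host and return the round-trip times, in milliseconds."""
    output = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    ).stdout
    return [float(ms) for ms in re.findall(r"time=([\d.]+)\s*ms", output)]

for label, host in [("first hop (assumed gateway)", "192.168.1.1"),
                    ("public internet", "example.com")]:
    rtts = ping_latencies(host)
    if rtts:
        print(f"{label}: avg {statistics.mean(rtts):.0f} ms, "
              f"max {max(rtts):.0f} ms over {len(rtts)} replies")
    else:
        print(f"{label}: no replies")
```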

The second hop hits a variety of different IPs, which might relate to some kind of multiplexed EVDO connection. These are internal non-routable IPs. The first public IP comes at the third hop, and is a router on HopOne.net in their Washington, DC area data center. I suspect the second half of the bad latency comes in this WWAN connection and the routing of the messages to DC. I’m not sure how it could be done better, but one imagines it could route directly to one of the national wireless networks. The MBTA commuter rail does this with AT&T, as I understand it.

So overall, I’m glad Amtrak has implemented WiFi, and now that it’s here I hope they improve it to make it usable. I fear they will just start charging to cut down on the overload. But it would be better for the world if they did not, and simply socialized the cost across all passengers. It won’t be long before 100% of passengers have some kind of WiFi device.