Converting Queries to Commands

When methods focus on a single responsibility, they fall into two broad categories. They are either queries that compute and return some value without producing side effects, or they are commands that change the state of the world or the state of the object they are on.

The former style is more common in functional programming. In fact, we can say that functional programming is all about organizing our code around pure queries. In contrast, OO biases toward methods that return void. We call them to send a message to an object, telling it something. The object can then change its own state or state someplace else in the world.

When refactoring, it can be useful at times to change from one style to the other. One particular case is when code queries an object only to go back to the same object and do things to it. Here’s an example:

  if (customer != null && customer.isActive()) {
    if (notice.isQuarterly()) {
      customer.clearQuarterlyNotices();
    }
    customer.addNotice(notice);
  }

Here we have a bit of work that happens conditionally, based upon a query to Customer. If the customer is active, we can do all of it.

When I look at code like this and consider how to refactor it, I often think about whether I should extract the code of the if-statement into a new method or extract the entire if-statement. I know that the work belongs on the Customer class but how do we deal with the conditionality of the code? I think we can get an answer by going back to the basics of object orientation.

OO’s primary advantage is decoupling. We send messages to objects and it is up to them to decide what to do. This view of OO comes from Alan Kay and it takes quite a while to internalize. One of the things you have to accept is that when you tell an object to do something there’s no guarantee that it will actually do it. You could, for instance, tell a graphical widget to move but it may not. It could be a null object that simply receives messages and does nothing. These objects can be very useful in systems but you have to maintain the mental frame: what is done depends upon the object. The method calls we make communicate intent but the object bears the responsibility.

In the code above, we have some work that can happen in a customer. We’d like to move it there, but first, let’s imagine what it would look like if we were telling something to do the work rather than asking whether it should be done.

customer.whenActive((c) -> {
  if (notice.isQuarterly()) {
    c.clearQuarterlyNotices();
  }
  c.addNotice(notice);
});
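
The post doesn’t show whenActive itself, so here is a minimal sketch of how it might be implemented on Customer. The method name comes from the example above, but the Consumer-based signature is my assumption, not code from the post:

class Customer {

  // Hypothetical: run the given action against this customer, but only when active.
  public void whenActive(java.util.function.Consumer<Customer> action) {
    if (isActive()) {
      action.accept(this);
    }
  }

}

Callers never ask about the customer’s state; they hand over a block of work and let the object decide whether to run it.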

This code looks a lot like our original code. The difference is that we’ve replaced our query about the state of Customer with a method that executes a lambda on the customer (passed in as c) when it is active.

In this context we’re not asking, we’re telling. The lambda is pretty much exactly the body of the if-statement we had previously.

We don’t have to go this far, though. I was just illustrating how the block of an if-statement with a conditional made of queries can be seen as a command. Let’s extract the original if-statement and move it onto Customer:

class Customer {
  
  public void acceptNotice(Notice notice) {
    if (isActive()) {
      if (notice.isQuarterly()) {
        clearQuarterlyNotices();
      }
      addNotice(notice);
    }
  } 

}

How do you feel about the fact that this method may not add a notice? What we have to think about is whether the name acceptNotice is likely to be construed as an indication that the customer will hold onto it. For the most part, the names we use can help communicate the degree of decoupling we want. Function names in procedural code often describe the work that they do. The possibility of polymorphism in OO lets us be more abstract. We can name them after our intentions and allow that objects may do the work whatever way they wish. In this case, I’m using the name acceptNotice, which is about as noncommittal as you can get with regard to the work that the method will be doing. In a way it’s like giving an object some data and saying “your turn!”

If I wanted to generalize a bit more I’d use a name that hints at event-iness, like onNewNotice.

class Customer {
  
  public void onNewNotice(Notice notice) {
    if (isActive()) {
      if (notice.isQuarterly()) {
        clearQuarterlyNotices();
      }
      addNotice(notice);
    }
  } 

}

When we move from queries to commands, we often have to raise the abstraction level of names. It’s fine to do. For me, it aligns with my philosophy that objects are for decoupling.

Why You Should Never Use MongoDB

Disclaimer: I do not build database engines. I build web applications. I run 4-6 different projects every year, so I build a lot of web applications. I see apps with different requirements and different data storage needs. I’ve deployed most of the data stores you’ve heard about, and a few that you probably haven’t.

I’ve picked the wrong one a few times. This is a story about one of those times — why we picked it originally, how we discovered it was wrong, and how we recovered. It all happened on an open source project called Diaspora.

The project

Diaspora is a distributed social network with a long history. Waaaaay back in early 2010, four undergraduates from New York University made a Kickstarter video asking for $10,000 to spend the summer building a distributed alternative to Facebook. They sent it out to friends and family, and hoped for the best.

But they hit a nerve. There had just been another Facebook privacy scandal, and when the dust settled on their Kickstarter, they had raised over $200,000 from 6400 different people for a software project that didn’t yet have a single line of code written.

Diaspora was the first Kickstarter project to vastly overrun its goal. As a result, they got written up in the New York Times – which turned into a bit of a scandal, because the chalkboard in the backdrop of the team photo had a dirty joke written on it, and no one noticed until it was actually printed. In the NEW YORK TIMES. The fallout from that was actually how I first heard about the project.

As a result of their Kickstarter success, the guys left school and came out to San Francisco to start writing code. They ended up in my office. I was working at Pivotal Labs at the time, and one of the guys’ older brothers also worked there, so Pivotal offered them free desk space, internet, and, of course, access to the beer fridge. I worked with official clients during the day, then hung out with them after work and contributed code on weekends.

They ended up staying at Pivotal for more than two years. By the end of that first summer, though, they already had a minimal but working (for some definition) implementation of a distributed social network built in Ruby on Rails and backed by MongoDB.

That’s a lot of buzzwords. Let’s break it down.

“Distributed social network”

If you’ve seen The Social Network, you know everything you need to know about Facebook. It’s a web app, it runs on a single logical server, and it lets you stay in touch with people. Once you log in, Diaspora’s interface looks structurally similar to Facebook’s:

A screenshot of the Diaspora user interface

There’s a feed in the middle showing all your friends’ posts, and some other random stuff along the sides that no one has ever looked at. The main technical difference between Diaspora and Facebook is invisible to end users: it’s the “distributed” part.

The Diaspora infrastructure is not located behind a single web address. There are hundreds of independent Diaspora servers. The code is open source, so if you want to, you can stand up your own server. Each server, called a pod, has its own database and its own set of users, and will interoperate with all the other Diaspora pods that each have their own database and set of users.

The Diaspora Ecosystem

Pods of different sizes communicate with each other, without a central hub.

Each pod communicates with the others through an HTTP-based API. Once you set up an account on a pod, it’ll be pretty boring until you follow some other people. You can follow other users on your pod, and you can also follow people who are users on other pods. When someone you follow on another pod posts an update, here’s what happens:

1. The update goes into the author’s pod’s database.

2. Your pod is notified over the API.

3. The update is saved in your pod’s database.

4. You look at your activity feed and see that post mixed in with posts from the other people you follow.

Comments work the same way. On any single post, some comments might be from people on the same pod as the post’s author, and some might be from people on other pods. Everyone who has permission to see the post sees all the comments, just as you would expect if everyone were on a single logical server.

Who cares?

There are technical and legal advantages to this architecture. The main technical advantage is fault tolerance.

If any one of the pods goes down, it doesn’t bring the others down. The system survives, and even expects, network partitioning. There are some interesting political implications to that — for example, if you’re in a country that shuts down outgoing internet to prevent access to Facebook and Twitter, your pod running locally still connects you to other people within your country, even though nothing outside is accessible.

The main legal advantage is server independence. Each pod is a legally separate entity, governed by the laws of wherever it’s set up. Each pod also sets its own terms of service. On most of them, you can post content without giving up your rights to it, unlike on Facebook. Diaspora is free software both in the “gratis” and the “libre” sense of the term, and most of the people who run pods care deeply about that sort of thing.

So that’s the architecture of the system. Let’s look at the architecture within a single pod.

It’s a Rails app.

Each pod is a Ruby on Rails web application backed by a database, originally MongoDB. In some ways the codebase is a ‘typical’ Rails app — it has both a visual and programmatic UI, some Ruby code, and a database. But in other ways it is anything but typical.

The internal structure of one Diaspora pod

The visual UI is of course how website users interact with Diaspora. The API is used by various Diaspora mobile clients — that part’s pretty typical — but it’s also used for “federation,” which is the technical name for inter-pod communication. (I asked where the Romulans’ access point was once, and got a bunch of blank looks. Sigh.) So the distributed nature of the system adds layers to the codebase that aren’t present in a typical app.

And of course, MongoDB is an atypical choice for data storage. The vast majority of Rails applications are backed by PostgreSQL or (less often these days) MySQL.

So that’s the code. Let’s consider what kind of data we’re storing.

I Do Not Think That Word Means What You Think That Means

“Social data” is information about our network of friends, their friends, and their activity. Conceptually, we do think about it as a network — an undirected graph in which we are in the center, and our friends radiate out around us.

Photos all from rubyfriends.com. Thanks Matt Rogers, Steve Klabnik, Nell Shamrell, Katrina Owen, Sam Livingston-Grey, Josh Susser, Akshay Khole, Pradyumna Dandwate, and Hephzibah Watharkar for contributing to #rubyfriends!

When we store social data, we’re storing that graph topology, as well as the activity that moves along those edges.

For quite a few years now, the received wisdom has been that social data is not relational, and that if you store it in a relational database, you’re doing it wrong.

But what are the alternatives? Some folks say graph databases are more natural, but I’m not going to cover those here, since graph databases are too niche to be put into production. Other folks say that document databases are perfect for social data, and those are mainstream enough to actually be used. So let’s look at why people think social data fits more naturally in MongoDB than in PostgreSQL.

How MongoDB Stores Data

MongoDB is a document-oriented database. Instead of storing your data in tables made out of individual rows, like a relational database does, it stores your data in collections made out of individual documents. In MongoDB, a document is a big JSON blob with no particular format or schema.

Let’s say you have a set of relationships like this that you need to model. This is quite similar to a project that came through Pivotal that used MongoDB, and was the best use case I’ve ever seen for a document database.

At the root, we have a set of TV shows. Each show has many seasons, each season has many episodes, and each episode has many reviews and many cast members. When users come into this site, typically they go directly to the page for a particular TV show. On that page they see all the seasons and all the episodes and all the reviews and all the cast members from that show, all on one page. So from the application perspective, when the user visits a page, we want to retrieve all of the information connected to that TV show.

There are a number of ways you could model this data. In a typical relational store, each of these boxes would be a table. You’d have a tv_shows table, a seasons table with a foreign key into tv_shows, an episodes table with a foreign key into seasons, and reviews and cast_members tables with foreign keys into episodes. So to get all the information for a TV show, you’re looking at a five-table join.

We could also model this data as a set of nested hashes. The set of information about a particular TV show is one big nested key/value data structure. Inside a TV show, there’s an array of seasons, each of which is also a hash. Within each season, an array of episodes, each of which is a hash, and so on. This is how MongoDB models the data. Each TV show is a document that contains all the information we need for one show.

Here’s an example document for one TV show, Babylon 5.
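
The original shows the document as an image; here is a condensed, hypothetical sketch of its shape (the field names and sample values are illustrative, not the post’s actual data):

{
  "title": "Babylon 5",
  "network": "PTEN",
  "seasons": [
    {
      "number": 1,
      "episodes": [
        {
          "number": 1,
          "title": "Midnight on the Firing Line",
          "reviews": [
            { "rating": 9, "text": "Still holds up." }
          ],
          "cast_members": [
            { "name": "Mira Furlan", "character": "Delenn" }
          ]
        }
      ]
    }
  ]
}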

It’s got some title metadata, and then it’s got an array of seasons. Each season is itself a hash with metadata and an array of episodes. In turn, each episode has some metadata and arrays for both reviews and cast members.

It’s basically a huge fractal data structure.

Sets of sets of sets of sets. Tasty fractals.

All of the data we need for a TV show is under one document, so it’s very fast to retrieve all this information at once, even if the document is very large. There’s a TV show here in the US called “General Hospital” that has aired over 12,000 episodes over the course of 50+ seasons. On my laptop, PostgreSQL takes about a minute to get denormalized data for 12,000 episodes, while retrieval of the equivalent document by ID in MongoDB takes a fraction of a second.

So in many ways, this application presented the ideal use case for a document store.

Ok. But what about social data?

Right. When you come to a social networking site, there’s only one important part of the page: your activity stream. The activity stream query gets all of the posts from the people you follow, ordered by most recent. Each of those posts has nested information within it, such as photos, likes, reshares, and comments.

The nested structure of activity stream data looks very similar to what we were looking at with the TV shows.

Users have friends, friends have posts, posts have comments and likes, each comment has one commenter and each like has one liker. Relationship-wise, it’s not a whole lot more complicated than TV shows. And just like with TV shows, we want to pull all this data at once, right after the user logs in. Furthermore, in a relational store, with the data fully normalized, it would be a seven-table join to get everything out.

Seven-table joins. Ugh. Suddenly storing each user’s activity stream as one big denormalized nested data structure, rather than doing that join every time, seems pretty attractive.

In 2010, when the Diaspora team was making this decision, Etsy’s articles about using document stores were quite influential, although they’ve since publicly moved away from MongoDB for data storage. Likewise, at the time, Facebook’s Cassandra was also stirring up a lot of conversation about leaving relational databases. Diaspora chose MongoDB for their social data in this zeitgeist. It was not an unreasonable choice at the time, given the information they had.

What could possibly go wrong?

There is a really important difference between Diaspora’s social data and the Mongo-ideal TV show data that no one noticed at first.

With TV shows, each box in the relationship diagram is a different type. TV shows are different from seasons are different from episodes are different from reviews are different from cast members. None of them is even a sub-type of another type.

But with social data, some of the boxes in the relationship diagram are the same type. In fact, friends, commenters, and likers are all the same type: they are all Diaspora users.

A user has friends, and each friend may themselves be a user. Or, they may not, because it’s a distributed system. (That’s a whole layer of complexity that I’m just skipping for today.) In the same way, commenters and likers may also be users.

This type duplication makes it way harder to denormalize an activity stream into a single document. That’s because in different places in your document, you may be referring to the same concept — in this case, the same user. The user who liked that post in your activity stream may also be the user who commented on a different post.

We can represent this in MongoDB in a couple of different ways. Duplication is an easy option. All the information for that friend is copied and saved to the like on the first post, and then a separate copy is saved to the comment on the second post. The advantage is that all the data is present everywhere you need it, and you can still pull the whole activity stream back as a single document.

Here’s what this kind of fully denormalized stream document looks like.
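
The post shows this as an image; a trimmed-down, hypothetical sketch of the shape it describes (names, URLs, and fields are illustrative) might be:

{
  "name": "Joe",
  "url": "http://pod.example/joe",
  "stream": [
    {
      "post": "First snow of the year!",
      "author": { "name": "Jane", "url": "http://pod.example/jane" },
      "likes": [
        { "user": { "name": "Joe", "url": "http://pod.example/joe" } }
      ],
      "comments": []
    }
  ]
}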

Here we have copies of user data inlined. This is Joe’s stream, and it has a copy of his user data, including his name and URL, at the top level. His stream, just underneath, contains Jane’s post. Joe has liked Jane’s post, so under likes for Jane’s post, we have a separate copy of Joe’s data.

You can see why this is attractive: all the data you need is already located where you need it.

You can also see why this is dangerous. Updating a user’s data means walking through all the activity streams that they appear in to change the data in all those different places. This is very error-prone, and often leads to inconsistent data and mysterious errors, particularly when dealing with deletions.

Is there no hope?

There is another approach you can take to this problem in MongoDB, which will be more familiar if you have a relational background. Instead of duplicating user data, you can store references to users in the activity stream documents.

With this approach, instead of inlining this user data wherever you need it, you give each user an ID. Once users have IDs, we store the user’s ID every place that we were previously inlining data. The new IDs replace the inlined user data below.

MongoDB actually uses BSON IDs, which are strings sort of like GUIDs, but to make these samples easier to read I’m just using integers.
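
Again as a hypothetical sketch, the same stream with references instead of copies might look roughly like this, with the user documents pulled out into their own collection:

{
  "user_id": 1,
  "stream": [
    {
      "post": "First snow of the year!",
      "author_id": 2,
      "likes": [ { "user_id": 1 } ],
      "comments": []
    }
  ]
}

{ "_id": 1, "name": "Joe", "url": "http://pod.example/joe" }
{ "_id": 2, "name": "Jane", "url": "http://pod.example/jane" }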

This eliminates our duplication problem. When user data changes, there’s only one document that gets rewritten. However, we’ve created a new problem for ourselves. Because we’ve moved some data out of the activity streams, we can no longer construct an activity stream from a single document. This is less efficient and more complex. Constructing an activity stream now requires us to 1) retrieve the stream document, and then 2) retrieve all the user documents to fill in names and avatars.

What’s missing from MongoDB is a SQL-style join operation, which is the ability to write one query that mashes together the activity stream and all the users that the stream references. Because MongoDB doesn’t have this ability, you end up manually doing that mashup in your application code, instead.
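
Here’s a hedged sketch of what that mashup looks like in application code, written against a modern promise-based Node driver for brevity (collection and field names are hypothetical, carried over from the sketches above):

async function loadActivityStream(db, userId) {
  // Fetch the stream document, then fetch every user it references,
  // and stitch the user data back in by hand: the join MongoDB won't do.
  const stream = await db.collection('streams').findOne({ user_id: userId });
  const ids = new Set();
  for (const post of stream.stream) {
    ids.add(post.author_id);
    for (const like of post.likes) ids.add(like.user_id);
  }
  const users = await db.collection('users')
    .find({ _id: { $in: [...ids] } })
    .toArray();
  const byId = new Map(users.map(u => [u._id, u]));
  for (const post of stream.stream) {
    post.author = byId.get(post.author_id);
    for (const like of post.likes) like.user = byId.get(like.user_id);
  }
  return stream;
}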

Simple Denormalized Data

Let’s return to TV shows for a second. The set of relationships for a TV show don’t have a lot of complexity. Because all the boxes in the relationship diagram are different entities, the entire query can be denormalized into one document with no duplication and no references. In this document database, there are no links between documents. It requires no joins.

On a social network, however, nothing is that self-contained. Any time you see something that looks like a name or a picture, you expect to be able to click on it and go see that user, their profile, and their posts. A TV show application doesn’t work that way. If you’re on season 1 episode 1 of Babylon 5, you don’t expect to be able to click through to season 1 episode 1 of General Hospital.

Don’t. Link. The. Documents.

Once we started doing ugly MongoDB joins manually in the Diaspora code, we knew it was the first sign of trouble. It was a sign that our data was actually relational, that there was value to that structure, and that we were going against the basic concept of a document data store.

Whether you’re duplicating critical data (ugh), or using references and doing joins in your application code (double ugh), when you have links between documents, you’ve outgrown MongoDB. When the MongoDB folks say “documents,” in many ways, they mean things you can print out on a piece of paper and hold. A document may have internal structure — headings and subheadings and paragraphs and footers — but it doesn’t link to other documents. It’s a self-contained piece of semi-structured data.

If your data looks like that, you’ve got documents. Congratulations! It’s a good use case for Mongo. But if there’s value in the links between documents, then you don’t actually have documents. MongoDB is not the right solution for you. It’s certainly not the right solution for social data, where links between documents are actually the most critical data in the system.

So social data isn’t document-oriented. Does that mean it’s actually…relational?

That Word Again

When people say “social data isn’t relational,” that’s not actually what they mean. They mean one of these two things:

1. “Conceptually, social data is more of a graph than a set of tables.”

This is absolutely true. But there are actually very few concepts in the world that are naturally modeled as normalized tables. We use that structure because it’s efficient, because it avoids duplication, and because when it does get slow, we know how to fix it.

2. “It’s faster to get all the data from a social query when it’s denormalized into a single document.”

This is also absolutely true. When your social data is in a relational store, you need a many-table join to extract the activity stream for a particular user, and that gets slow as your tables get bigger. However, we have a well-understood solution to this problem. It’s called caching.

At the All Your Base Conf in Oxford earlier this year, where I gave the talk version of this post, Neha Narula had a great talk about caching that I recommend you watch once it’s posted. In any case, caching in front of a normalized data store is a complex but well-understood problem. I’ve seen projects cache denormalized activity stream data into a document database like MongoDB, which makes retrieving that data much faster. The only problem they have then is cache invalidation.

“There are only two hard problems in computer science: cache invalidation and naming things.”

Phil Karlton

It turns out cache invalidation is actually pretty hard. Phil Karlton wrote most of SSL version 3, X11, and OpenGL, so he knows a thing or two about computer science.

Cache Invalidation As A Service

But what is cache invalidation, and why is it so hard?

Cache invalidation is just knowing when a piece of your cached data is out of date, and needs to be updated or replaced. Here’s a typical example that I see every day in web applications. We have a backing store, typically PostgreSQL or MySQL, and then in front of that we have a caching layer, typically Memcached or Redis. Requests to read a user’s activity stream go to the cache rather than the database directly, which makes them very fast.

Typical cache and backing store setup

Application writes are more complicated. Let’s say a user with two followers writes a new post. The first thing that happens (part 1) is that the post data is copied into the backing store. Once that completes, a background job (part 2)  appends that post to the cached activity stream of both of the users who follow the author.
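
A minimal sketch of that write path, with an entirely hypothetical db/cache interface (in a real system, part 2 runs as a background job):

async function publishPost(db, cache, author, post) {
  // Part 1: the post goes into the consistent backing store first.
  await db.insertPost(post);

  // Part 2: append the post to each follower's cached activity stream.
  for (const followerId of author.followerIds) {
    const key = `stream:${followerId}`;
    const stream = (await cache.get(key)) || [];
    stream.unshift(post); // newest first
    await cache.set(key, stream);
  }
}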

This pattern is quite common. Twitter holds recently-active users’ activity streams in an in-memory cache, which they append to when someone they follow posts something. Even smaller applications that employ some kind of activity stream will typically end up here (see: seven-table join).

Back to our example. When the author changes an existing post, the update process is essentially the same as for a create, except instead of appending to the cache, it updates an item that’s already there.

What happens if that step 2 background job fails partway through? Machines get rebooted, network cables get unplugged, applications restart. Instability is the only constant in our line of work. When that happens, you’ll end up with invalid data in your cache. Some copies of the post will have the old title, and some copies will have the new title. That’s a hard problem, but with a cache, there’s always the nuclear option.

You can always delete the entire activity stream record out of your cache and regenerate it from your consistent backing store. It may be slow, but at least it’s possible.

What if there is no backing store? What if you skip step 1? What if the cache is all you have?

When MongoDB is all you have, it’s a cache with no backing store behind it. It will become inconsistent. Not eventually consistent — just plain, flat-out inconsistent, for all time. At that point, you have no options. Not even a nuclear one. You have no way to regenerate the data in a consistent state.

When Diaspora decided to store social data in MongoDB, we were conflating a database with a cache. Databases and caches are very different things. They have very different ideas about permanence, transience, duplication, references, data integrity, and speed.

The Conversion

Once we figured out that we had accidentally chosen a cache for our database, what did we do about it?

Well, that’s the million-dollar question. But I’ve already answered the billion-dollar question. In this post I’ve talked about how we used MongoDB vs. how it was designed to be used. I’ve talked about it as though all that information were obvious, and the Diaspora team just failed to research adequately before choosing.

But this stuff wasn’t obvious at all. The MongoDB docs tell you what it’s good at, without emphasizing what it’s not good at. That’s natural. All projects do that. But as a result, it took us about six months, a lot of user complaints, and a lot of investigation to figure out that we were using MongoDB the wrong way.

There was nothing to do but take the data out of MongoDB and move it to a relational store, dealing as best we could with the inconsistent data we uncovered along the way. The data conversion itself — export from MongoDB, import to MySQL — was straightforward. For the mechanical details, you can see my slides from All Your Base Conf 2013.

The Damage

We had eight months of production data, which turned into about 1.2 million rows in MySQL. We spent four pair-weeks developing the code for the conversion, and when we pulled the trigger, the main site had about two hours of downtime. That was more than acceptable for a project that was in pre-alpha. We could have reduced that downtime more, but we had budgeted for eight hours of downtime, so two actually seemed fantastic.

Epilogue

Remember that TV show application? It was the perfect use case for MongoDB. Each show was one document, perfectly self-contained. No references to anything, no duplication, and no way for the data to become inconsistent.

About three months into development, it was still humming along nicely on MongoDB. One Monday, at the weekly planning meeting, the client told us about a new feature that one of their investors wanted: when they were looking at the actors in an episode of a show, they wanted to be able to click on an actor’s name and see that person’s entire television career. They wanted a chronological listing of all of the episodes of all the different shows that actor had ever been in.

We stored each show as a document in MongoDB containing all of its nested information, including cast members. If the same actor appeared in two different episodes, even of the same show, their information was stored in both places. We had no way to tell, aside from comparing the names, whether they were the same person. So to implement this feature, we needed to search through every document to find and de-duplicate instances of the actor that the user clicked on. Ugh. At a minimum, we needed to de-dup them once, and then maintain an external index of actor information, which would have the same invalidation issues as any other cache.

You See Where This Is Going

The client expected this feature to be trivial. If the data had been in a relational store, it would have been. As it was, we first tried to convince the PM they didn’t need it. After that failed, we offered some cheaper alternatives, such as linking to an IMDB search for the actor’s name. The company made money from advertising, though, so they wanted users to stay on their site rather than going off to IMDB.

This feature request eventually prompted the project’s conversion to PostgreSQL. After a lot more conversation with the client, we realized that the business saw lots of value in linking TV shows together. They envisioned seeing other shows a particular director had been involved with, and episodes of other shows that were released the same week this one was, among other things.

This was ultimately a communication problem rather than a technical problem. If these conversations had happened sooner, if we had taken the time to really understand how the client saw the data and what they wanted to do with it, we probably would have done the conversion earlier, when there was less data, and it was easier.

Always Be Learning

I learned something from that experience: MongoDB’s ideal use case is even narrower than our television data. The only thing it’s good at is storing arbitrary pieces of JSON. “Arbitrary,” in this context, means that you don’t care at all what’s inside that JSON. You don’t even look. There is no schema, not even an implicit schema, as there was in our TV show data. Each document is just a blob whose interior you make absolutely no assumptions about.

At RubyConf this weekend, I ran into Conrad Irwin, who suggested this use case. He’s used MongoDB to store arbitrary bits of JSON that come from customers through an API. That’s reasonable. The CAP theorem doesn’t matter when your data is meaningless. But in interesting applications, your data isn’t meaningless.

I’ve heard many people talk about dropping MongoDB into their web application as a replacement for MySQL or PostgreSQL. There are no circumstances under which that is a good idea. Schema flexibility sounds like a great idea, but the only time it’s actually useful is when the structure of your data has no value. If you have an implicit schema — meaning, if there are things you are expecting in that JSON — then MongoDB is the wrong choice. I suggest taking a look at PostgreSQL’s hstore (now apparently faster than MongoDB anyway), and learning how to make schema changes. They really aren’t that hard, even in large tables.

Find The Value

When you’re picking a data store, the most important thing to understand is where in your data — and where in its connections — the business value lies. If you don’t know yet, which is perfectly reasonable, then choose something that won’t paint you into a corner. Pushing arbitrary JSON into your database sounds flexible, but true flexibility is easily adding the features your business needs.

Make the valuable things easy.

The End.

Thanks for reading!

Comment from davehng: This is a very good post. If a system is using a document db, periodically consider whether the data store and caching responsibilities have been misaligned.

The Churn

Did you hear about the guy who said goodbye to OO?

Oh no. Not another one. What did he say?

He described all the promises of OO, and how none of them had really been delivered, and that all the features of OO cost more than they were worth, and that functional programming was better and...

Sigh. Yes, I've heard it all before.

So, then, OO is finally dead, and we can move on.

Move on to what?

Why, to THE NEXT BIG THING of course!

Oh. -- That. Do you know what it is yet?

I dunno, I'm pretty excited about micro-services; and I'm really keen on Elixir; and I hear React is really cool; and ...

Yes. Yes. The Churn. You are caught up in The Churn.

Huh? What do you mean by that? These are exciting times.

Actually, I find them rather depressing.

Why? I mean, there are new technologies bubbling up every few days! We are climbing to ever higher heights.

Bah! All we are really doing is reinventing the wheel, over, and over again. And we're wasting massive amounts of time and effort doing it.

Oh come on! We're making PROGRESS.

Progress. Really? That's not the way I see it.

Well, just what is it that you see?

I see waste. Massive, incalculable, waste. Waste, piled upon waste, piled upon even more waste.

How can you say that?

Well, consider this OO issue. OO isn't dead. OO was never alive. OO is a technique; and a good one. Claiming it's dead is like claiming that a perfectly good screwdriver is dead. Saying goodbye to OO is like saying goodbye to a perfectly good screwdriver. It's waste!

But Functional Programming is better!

I'm sorry, but that's like saying that a hammer is better than a screwdriver. Functional programming is not "better" than Object Oriented programming. Functional Programming is a technique, and a good one, that can be used alongside Object Oriented programming.

That's not what I heard. I heard they were mutually exclusive.

Of course they aren't. They address orthogonal concerns. Concerns that are present in all projects.

Look there are people who think that software is a linear chain of progress. That we are climbing a ladder one rung at a time; and that every "new" thing is better than the previous "older" thing. That's not the way it works.

So, how does it work -- in your opinion?

Progress in software has followed a logarithmic growth curve. In the early years, progress was stark and dramatic. In later years the progress became much more incremental. Now, progress is virtually non-existent.

Look: Assembler was massively better than Binary. Fortran was much better than Assembler. C was a lot better than Fortran. C++ was probably better than C. Java was an improvement over C++. Ruby is probably a bit better than Java.

Waterfall was a whole lot better than nothing. Agile was better than waterfall. Lean was a little better than Agile. Kanban may have been something of an improvement.

Every year, though we apply massive effort, we make less progress than the year before, because every year we get closer and closer to the asymptote.

Asymptote! You think there's an upper limit to software technology and progress?

I absolutely do. What's more, I think we are so close to that limit now that any further striving is fruitless. We are well past the point of diminishing returns.

What? That sounds ludicrous! That sounds depressing!

I understand. But that's because we got used to all that early rapid growth. Those were heady days; and we want them back again. But they're gone; and we have to face the fact that we are wasting time and effort on a massive scale trying to recreate them.

But if we don't push for the future; we'll never create it!

Believe me, I definitely want us to push for the future. That's not what we are doing. What we are doing is pining for the past.

So what future do you think we should be pushing towards?

A productive one. A future that is not dominated by all this wasteful churn.

What's wasteful about it?

Have you ever used IntelliJ or Eclipse to program Java?

Sure.

Those are incredibly powerful tools. A skilled professional can be wildly productive with those tools. The refactorings! The representations! The facilities! My God; those tools are spectacular!

Yet every time a new language comes along we dash away from those powerful tools to use the NEXT NEW THING. And the tools for that new language are like programming in the third world. God, you often don't even have a reasonable rename refactoring!

It takes time to build up a reasonable toolset. If we keep on switching languages, we'll never be able to tool those languages up.

But the newer languages are better.

Oh bull! They're different; but they aren't better. Or at least not better enough to justify throwing our toolset back into the stone age.

And think of the training costs for adopting a new language. Think of the cost to the organization of having to use 84 different languages because the programmers get excited about shiny new things every two weeks.

Shiny new things? That's kind of insulting, isn't it?

I suppose so; but that's what it comes down to. New languages aren't better; they are just shiny. And the search for the golden fleece of a new language, or a new framework, or a new paradigm, or a new process has reached the point of being unprofessional.

Unprofessional?

Yes! Unprofessional. We need to realize that we have hit the asymptote. It's time to stop the wasteful churning over languages, and frameworks, and paradigms, and processes.

It's time to simply get down to work.

We need to choose a language, or two, or three. A small set of simple frameworks. Build up our tools. Solidify our processes. And become a goddam profession.

What Color is Your Function?

I don’t know about you, but nothing gets me going in the morning quite like a good old fashioned programming language rant. It stirs the blood to see someone skewer one of those “blub” languages the plebeians use, muddling through their day with it between furtive visits to StackOverflow.

(Meanwhile, you and I only use the most enlightened of languages. Chisel-sharp tools designed for the manicured hands of expert craftspersons such as ourselves.)

Of course, as the author of said screed, I run a risk. The language I mock could be one you like! Without realizing it, I could have let the rabble into my blog, pitchforks and torches at the ready, and my foolhardy pamphlet could draw their ire!

To protect myself from the heat of those flames, and to avoid offending your possibly delicate sensibilities, instead, I’ll rant about a language I just made up. A strawman whose sole purpose is to be set aflame.

I know, this seems pointless right? Trust me, by the end, we’ll see whose face (or faces!) have been painted on his straw noggin.

A new language

Learning an entire new (crappy) language just for a blog post is a tall order, so let’s say it’s mostly similar to one you and I already know. We’ll say it has syntax sorta like JS. Curly braces and semicolons. if, while, etc. The lingua franca of the programming grotto.

I’m picking JS not because that’s what this post is about. It’s just that it’s the language you, statistical representation of the average reader, are most likely to be able to grok. Voilà:

function thisIsAFunction() {
  return "It's awesome";
}

Because our strawman is a modern (shitty) language, we also have first-class functions. So you can make something like:

// Return a list containing all of the elements in collection
// that match predicate.
function filter(collection, predicate) {
  var result = [];
  for (var i = 0; i < collection.length; i++) {
    if (predicate(collection[i])) result.push(collection[i]);
  }
  return result;
}

This is one of those higher-order functions, and, like the name implies, they are classy as all get out and super useful. You’re probably used to them for mucking around with collections, but once you internalize the concept, you start using them damn near everywhere.

Maybe in your testing framework:

describe("An apple", function() {
  it("ain't no orange", function() {
    expect("Apple").not.toBe("Orange");
  });
});

Or when you need to parse some data:

tokens.match(Token.LEFT_BRACKET, function(token) {
  // Parse a list literal...
  tokens.consume(Token.RIGHT_BRACKET);
});

So you go to town and write all sorts of awesome reusable libraries and applications passing around functions, calling functions, returning functions. Functapalooza.

What color is your function?

Except wait. Here’s where our language gets screwy. It has this one peculiar feature:

1. Every function has a color.

Each function—anonymous callback or regular named one—is either red or blue. Since my blog’s code highlighter can’t handle actual color, we’ll say the syntax is like:

bluefunction doSomethingAzure() {
  // This is a blue function...
}

redfunction doSomethingCarnelian() {
  // This is a red function...
}

There are no colorless functions in the language. Want to make a function? Gotta pick a color. Them’s the rules. And, actually, there are a couple more rules you have to follow too:

2. The way you call a function depends on its color.

Imagine a “blue call” syntax and a “red call” syntax. Something like:

doSomethingAzure(...)blue;
doSomethingCarnelian()red;

If you get it wrong—call a red function with blue after the parentheses or vice versa—it does something bad. Dredge up some long-forgotten nightmare from your childhood like a clown with snakes for arms under your bed. That jumps out of your monitor and sucks out your vitreous humour.

Annoying rule, right? Oh, and one more:

3. You can only call a red function from within another red function.

You can call a blue function from within a red one. This is kosher:

redfunction doSomethingCarnelian() {
  doSomethingAzure()blue;
}

But you can’t go the other way. If you try to do this:

bluefunction doSomethingAzure() {
  doSomethingCarnelian()red;
}

Well, you’re gonna get a visit from old Spidermouth the Night Clown.

This makes writing higher-order functions like our filter() example trickier. We have to pick a color for it and that affects the colors of the functions we’re allowed to pass to it. The obvious solution is to make filter() red. That way, it can take either red or blue functions and call them. But then we run into the next itchy spot in the hairshirt that is this language:

4. Red functions are more painful to call.

For now, I won’t precisely define “painful”, but just imagine that the programmer has to jump through some kind of annoying hoops every time they call a red function. Maybe it’s really verbose, or maybe you can’t do it inside certain kinds of statements. Maybe you can only call them on line numbers that are prime.

What matters is that, if you decide to make a function red, everyone using your API will want to spit in your coffee and/or deposit some even less savory fluids in it.

The obvious solution then is to never use red functions. Just make everything blue and you’re back to the sane world where all functions have the same color, which is equivalent to them all having no color, which is equivalent to our language not being entirely stupid.

Alas, the sadistic language designers—and we all know all programming language designers are sadists, don’t we?—jabbed one final thorn in our side:

5. Some core library functions are red.

There are some functions built in to the platform, functions that we need to use, that we are unable to write ourselves, that only come in red. At this point, a reasonable person might think the language hates us.

It’s functional programming’s fault!

You might be thinking that the problem here is we’re trying to use higher-order functions. If we just stop flouncing around in all of that functional frippery and write normal blue collar first-order functions like God intended, we’d spare ourselves all the heartache.

If we only call blue functions, make our function blue. Otherwise, make it red. As long as we never make functions that accept functions, we don’t have to worry about trying to be “polymorphic over function color” (polychromatic?) or any nonsense like that.

But, alas, higher order functions are just one example. This problem is pervasive any time we want to break our program down into separate functions that get reused.

For example, let’s say we have a nice little blob of code that, I don’t know, implements Dijkstra’s algorithm over a graph representing how much your social network are crushing on each other. (I spent way too long trying to decide what such a result would even represent. Transitive undesirability?)

Later, you end up needing to use this same blob of code somewhere else. You do the natural thing and hoist it out into a separate function. You call it from the old place and your new code that uses it. But what color should it be? Obviously, you’ll make it blue if you can, but what if it uses one of those nasty red-only core library functions?

What if the new place you want to call it is blue? You’ll have to turn it red. Then you’ll have to turn the function that calls it red. Ugh. No matter what, you’ll have to think about color constantly. It will be the sand in your swimsuit on the beach vacation of development.

A colorful allegory

Of course, I’m not really talking about color here, am I? It’s an allegory, a literary trick. The Sneetches isn’t about stars on bellies, it’s about race. By now, you may have an inkling of what color actually represents. If not, here’s the big reveal:

Red functions are asynchronous ones.

If you’re programming in JavaScript on Node.js, every time you define a function that “returns” a value by invoking a callback, you just made a red function. Look back at that list of rules and see how my metaphor stacks up:

  1. Synchronous functions return values, async ones do not and instead invoke callbacks.

  2. Synchronous functions give their result as a return value, async functions give it by invoking a callback you pass to it.

  3. You can’t call an async function from a synchronous one because you won’t be able to determine the result until the async one completes later.

  4. Async functions don’t compose in expressions because of the callbacks, have different error-handling, and can’t be used with try/catch or inside a lot of other control flow statements.

  5. Node’s whole shtick is that the core libs are all asynchronous. (Though they did dial that back and start adding ___Sync() versions of a lot of things.)
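
To make the mapping concrete, here is the same read in both colors using Node’s fs module (the file name is just an example):

// Blue: synchronous, returns a value.
var fs = require('fs');
var config = fs.readFileSync('config.json', 'utf8');

// Red: asynchronous, "returns" by invoking a callback later.
fs.readFile('config.json', 'utf8', function (err, contents) {
  if (err) throw err;
  // Only in here can we actually use contents.
});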

When people talk about “callback hell” they’re talking about how annoying it is to have red functions in their language. When they create 4089 libraries for doing asynchronous programming, they’re trying to cope at the library level with a problem that the language foisted onto them.

I promise the future is better

People in the Node community have realized that callbacks are a pain for a long time, and have looked around for solutions. One technique that gets a bunch of people excited is promises, which you may also know by their rapper name “futures”.

These are sort of a jacked up wrapper around a callback and an error handler. If you think of passing a callback and errorback to a function as a concept, a promise is basically a reification of that idea. It’s a first-class object that represents an asynchronous operation.

I just jammed a bunch of fancy PL language in that paragraph so it probably sounds like a sweet deal, but it’s basically snake oil. Promises do make async code a little easier to write. They compose a bit better, so rule #4 isn’t quite so onerous.

But, honestly, it’s like the difference between being punched in the gut versus punched in the privates. Less painful, yes, but I don’t think anyone should really get thrilled about the value proposition.

You still can’t use them with exception handling or other control flow statements. You still can’t call a function that returns a future from synchronous code. (Well, you can, but if you do, the person who later maintains your code will invent a time machine, travel back in time to the moment that you did this and stab you in the face with a #2 pencil.)

You’ve still divided your entire world into asynchronous and synchronous halves and all of the misery that entails. So, even if your language features promises or futures, its face looks an awful lot like the one on my strawman.

(Yes, that means even Dart, the language I work on. That’s why I’m so excited some of the team are experimenting with other concurrency models.)

I’m awaiting a solution

C# programmers are probably feeling pretty smug right now (a condition they’ve increasingly fallen prey to as Hejlsberg and company have piled sweet feature after sweet feature into the language). In C#, you can use the await keyword to invoke an asynchronous function.

This lets you make asynchronous calls just as easily as you can synchronous ones, with the tiny addition of a cute little keyword. You can nest await calls in expressions, use them in exception handling code, stuff them inside control flow. Go nuts. Make it rain await calls like they’re dollars in the advance you got for your new rap album.

Async-await is nice, which is why we’re adding it to Dart. It makes it a lot easier to write asynchronous code. You know a “but” is coming. It is. But… you still have divided the world in two. Those async functions are easier to write, but they’re still async functions.

You’ve still got two colors. Async-await solves annoying rule #4: they make red functions not much worse to call than blue ones. But all of the other rules are still there:

  1. Synchronous functions return values, async ones return Task<T> (or Future<T> in Dart) wrappers around the value.

  2. Sync functions are just called, async ones need an await.

  3. If you call an async function you’ve got this wrapper object when you actually want the T. You can’t unwrap it unless you make your function async and await it. (But see below.)

  4. Aside from a liberal garnish of await, we did at least fix this.

  5. C#’s core library is actually older than async so I guess they never had this problem.

It is better. I will take async-await over bare callbacks or futures any day of the week. But we’re lying to ourselves if we think all of our troubles are gone. As soon as you start trying to write higher-order functions, or reuse code, you’re right back to realizing color is still there, bleeding all over your codebase.

What language isn’t colored?

So JS, Dart, C#, and Python have this problem. CoffeeScript and most other languages that compile to JS do too (which is why Dart inherited it). I think even ClojureScript has this issue even though they’ve tried really hard to push against it with their core.async stuff.

Wanna know one that doesn’t? Java. I know right? How often do you get to say, “Yeah, Java is the one that really does this right.”? But there you go. In their defense, they are actively trying to correct this oversight by moving to futures and async IO. It’s like a race to the bottom.

C# can actually avoid this problem too. They opted in to having color. Before they added async-await and all of the Task<T> stuff, you just used regular sync API calls. Three more languages that don’t have this problem: Go, Lua, and Ruby.

Any guess what they have in common?

Threads. Or, more precisely: multiple independent callstacks that can be switched between. It isn’t strictly necessary for them to be operating system threads. Goroutines in Go, coroutines in Lua, and fibers in Ruby are perfectly adequate.

(That’s why C# has that little caveat. You can avoid the pain of async in C# by using threads.)

Remembrance of operations past

The fundamental problem is “How do you pick up where you left off when an operation completes”? You’ve built up some big callstack and then you call some IO operation. For performance, that operation uses the operating system’s underlying asynchronous API. You cannot wait for it to complete because it won’t. You have to return all the way back to your language’s event loop and give the OS some time to spin before it will be done.

Once it is, you need to resume what you were doing. The usual way a language “remembers where it is” is the callstack. That tracks all of the functions that are currently being invoked and where the instruction pointer is in each one.

But to do async IO, you have to unwind and discard the entire C callstack. Kind of a Catch-22. You can do super fast IO, you just can’t do anything with the result! Every language that has async IO in its bowels—or in the case of JS, the browser’s event loop—copes with this in some way.

Node with its ever-marching-to-the-right callbacks stuffs all of those callframes in closures. When you do:

function makeSundae(callback) {
  scoopIceCream(function (iceCream) {
    warmUpCaramel(function (caramel) {
      callback(pourOnIceCream(iceCream, caramel));
    });
  });
}

Each of those function expressions closes over all of its surrounding context. That moves parameters like iceCream and caramel off the callstack and onto the heap. When the outer function returns and the callstack is trashed, it’s cool. That data is still floating around the heap.

The problem is you have to manually reify every damn one of these steps. There’s actually a name for this transformation: continuation-passing style. It was invented by language hackers in the 70s as an intermediate representation to use in the guts of their compilers. It’s a really bizarro way to represent code that happens to make some compiler optimizations easier to do.

No one ever for a second thought that a programmer would write actual code like that. And then Node came along and all of a sudden here we are pretending to be compiler back-ends. Where did we go wrong?

Note that promises and futures don’t actually buy you anything, either. If you’ve used them, you know you’re still hand-creating giant piles of function literals. You’re just passing them to .then() instead of to the asynchronous function itself.
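
For comparison, here is a sketch of makeSundae rewritten with promises, assuming scoopIceCream and warmUpCaramel now return promises (an assumption on my part; the post doesn’t show this version):

function makeSundae() {
  return scoopIceCream().then(function (iceCream) {
    return warmUpCaramel().then(function (caramel) {
      return pourOnIceCream(iceCream, caramel);
    });
  });
}

The function literals are all still there; they have just moved into .then().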

Awaiting a generated solution

Async-await does help. If you peel back your compiler’s skull and see what it’s doing when it hits an await call you’d see it actually doing the CPS-transform. That’s why you need to use await in C#: it’s a clue to the compiler to say, “break the function in half here”. Everything after the await gets hoisted into a new function that it synthesizes on your behalf.
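
In JavaScript terms, here is a sketch of makeSundae using async-await, again assuming the helpers return promises; each await marks a point where the compiler effectively splits the function:

async function makeSundae() {
  var iceCream = await scoopIceCream();  // split point 1
  var caramel = await warmUpCaramel();   // split point 2
  return pourOnIceCream(iceCream, caramel);
}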

This is why async-await didn’t need any runtime support in the .NET framework. The compiler compiles it away to a series of chained closures that it can already handle. (Interestingly, closures themselves also don’t need runtime support. They get compiled to anonymous classes. In C#, closures really are a poor man’s objects.)

You might be wondering when I’m going to bring up generators. Does your language have a yield keyword? Then it can do something very similar.

(In fact, I believe generators and async-await are isomorphic. I’ve got a bit of code floating around in some dark corner of my hard disc that implements a generator-style game loop using only async-await.)

Where was I? Oh, right. So with callbacks, promises, async-await, and generators, you ultimately end up taking your asynchronous function and smearing it out into a bunch of closures that live over in the heap.

Your function passes the outermost one into the runtime. When the event loop or IO operation is done, it invokes that function and you pick up where you left off. But that means everything above you also has to return. You still have to unwind the whole stack.

This is where the “red functions can only be called by red functions” rule comes from. You have to closurify the entire callstack all the way back to main() or the event handler.
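
In code, that contagion looks something like this (serveDessert and plate are hypothetical helpers, just for illustration): once makeSundae is asynchronous, everything that wants its result has to become asynchronous too.

// serveDessert can't simply return a sundae; it has to go "red" as well.
async function serveDessert() {
  const sundae = await makeSundae();
  return plate(sundae);
}
// ...and whatever calls serveDessert() now faces the same choice, all the
// way back up to main() or the event handler.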

Reified callstacks

But if you have threads (green- or OS-level), you don’t need to do that. You can just suspend the entire thread and hop straight back to the OS or event loop without having to return from all of those functions.

Go is the language that does this most beautifully in my opinion. As soon as you do any IO operation, it just parks that goroutine and resumes any other ones that aren’t blocked on IO.

If you look at the IO operations in the standard library, they seem synchronous. In other words, they just do work and then return a result when they are done. But it’s not that they’re synchronous in the sense that word would mean in JavaScript. Other Go code can run while one of these operations is pending. It’s that Go has eliminated the distinction between synchronous and asynchronous code.

Concurrency in Go is a facet of how you choose to model your program, and not a color seared into each function in the standard library. This means all of the pain of the five rules I mentioned above is completely and totally eliminated.

So, the next time you start telling me about some new hot language and how awesome its concurrency story is because it has asynchronous APIs, now you’ll know why I start grinding my teeth. Because it means you’re right back to red functions and blue ones.

1 public comment
Groxx
There's another color-spectrum that Go's making more difficult, which is "serial" vs "concurrent". By pushing things to be goroutine-able, they're inherently pushing everything to be concurrency-aware, which is a famously hard problem in programming.

But the post is *excellent*. Totally worth a read.

How to find a bug in your code

2 Comments

So you’ve got a bug in your code? How in the world do you find the thing? Here are some of the techniques I use to identify the offending code.

Bisect differences

Was the bug introduced fairly recently? If so, you can jump back in the code to a time when the bug was known not to exist. Using the git bisect command, you can perform a binary search through your code base until you find the commit where the bug was introduced. Looking at that diff should give you a fairly small amount of code to dig through. This is a fantastic technique when you are working in a new or unfamiliar code base.

Explain the problem to a teammate

Speaking is thinking. Oftentimes, just forming the words to describe the bug to a teammate will lead you right to the source of the problem. A good colleague will ask you probing questions to help you think about the problem in a new way.

Signal Processing

I began my career programming signal processing microprocessors in assembly. (Fun, right?) In any assembly language it can be pretty hard to keep track of what was what (no named variables!). In signal processing, you get a data sample for a time period, transform it with a pipeline of algorithms, and pop out an output sample. When I would need to find a problem in a calculation, I had to step through the code, instruction by instruction, verifying the intermediate results in the registers along the way.

All code can be looked at like a signal processing chain. Something happens that then causes something else to happen, and then at some point you are expecting the correct output or side effect. To debug with this method, start at the beginning of your buggy scenario. As a mental exercise (or with a debugger or series of print statements), step through each method, function, or transformation one at a time until you find the problem. Granted, it may take you a long time if your problem is toward the end of your process, but you always know where you have been and where you are going.
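
As a small, hypothetical sketch of the idea (the stage functions here are made up for illustration), you can expose each intermediate result so it becomes obvious where the data stops looking the way you expect:

function process(sample) {
  const filtered = lowPassFilter(sample);
  console.log('after filter:', filtered);      // verify against the expected value

  const normalized = normalize(filtered);
  console.log('after normalize:', normalized); // still good here?

  const output = quantize(normalized);
  console.log('after quantize:', output);      // the first wrong value points at the bug
  return output;
}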

Test-Driven Bug Finding

There are a lot of assumptions you have about your code. Hopefully, most things you know are covered by a test; but if you suspect a problem somewhere, try poking at it with a new test. Maybe there is an unconsidered corner case or some unwritten assumption that was violated.
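
For example (a hypothetical sketch using Node's built-in assert module; the pricing module and applyDiscount function are made up), a small new test can turn the suspicion into something that either passes or fails:

const assert = require('assert');
const { applyDiscount } = require('./pricing'); // hypothetical module under suspicion

// Suspected corner case: does a discount on a zero-value order stay at zero?
assert.strictEqual(applyDiscount({ total: 0 }, 0.1).total, 0);

// Unwritten assumption: a discount should never push a total below zero.
assert.ok(applyDiscount({ total: 5 }, 1.5).total >= 0);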

It is easy to fall into the trap of guessing where the problem is, then looking there and not finding anything. Then, you jump over to somewhere else. You do that a couple of times and you have no idea where you’ve been, and you keep looking back at the same thing over and over. Try one of these techniques to keep track of where you've been and help you find the bug.

Want more? Check out A Methodical Approach to Dealing With Bugs

davehng
Good notes for new developers starting out!
1 public comment
ChrisDL
basic but golden advice
dreadhead
Discussing a bug with a teammate is huge. Just explaining the problem to someone else can often make you catch some dumb mistake you have made.
ChrisDL
agree to agree.

Baking a Cake with Rails

1 Share

In my last post, I cautioned against hastily reacting to the abundance of talks and blog posts describing migrations away from Rails monoliths. A preliminary question I didn't address is: why are there so many of these stories in the first place? Most authors and speakers explain that upon reaching a certain scale, their Rails monoliths became difficult to maintain, but why is that? There are many potential explanations, but in this post I argue one major reason this is so common is that Rails strongly encourages organizing code into horizontal layers at the expense of vertical slices.

Horizontal layers? Vertical slices? Tell me more about this delicious cake you must be describing.

Horizontal layers are the building blocks and general architectural components of your system. They stretch across most, if not all, domain concepts. The "tech stack" is a good place to start thinking about horizontal layers—a SQL database, backend JSON API, and JavaScript front-end could broadly be considered three distinct layers—but finer horizontal layers can be identified within application code. For example, controllers constitute a horizontal layer that deals with HTTP. Views are a layer responsible for HTML concerns. Models are the data persistence layer.

Hmmm... Well, would you look at that! It's MVC! Rails gives us these three horizontal layers out of the box, and as Rails's popularity attests, these are three very convenient starting layers. Prior to Rails, many web applications lumped multiple concerns into single PHP files. The strongly opinionated Ruby framework is a reaction to the difficulties of that architectural style, and indeed is a great alternative.

As a Rails application grows, the framework subtly encourages developers to create additional horizontal layers. Mailers, presenters, asynchronous jobs, and more are added to new, dedicated subdirectories beneath the top level app/ directory. We can make all sorts of horizontal layers, but there is an orthogonal abstraction to consider.

Cutting the cake

A vertical slice is a single path through the horizontal layers of the app. For example, an HTTP request is routed to a controller, which looks up data via a repository; the data is encapsulated in a model that gets passed to a presenter and rendered in an HTML view. The common element in a vertical slice is an action, and in particular one with specific domain meaning. In an e-commerce site, for example, vertical slices include creating accounts, finalizing orders, processing refunds, or activating promotions.

In most Rails apps, vertical slices are implicit—the paths exist in the code, but they are not immediately obvious from looking at the project directory like the horizontal layers under app/. In his post "Screaming Architecture", Uncle Bob describes some benefits of organizing code such that vertical slices are made explicit. Rather than glance at a codebase and exclaim, "Ah, a Rails app! I wonder what it does," one would rather say, "Ah, an e-commerce app! I wonder how it's implemented." The former looks like this:

app/
  controllers/
  mailers/
  models/
  views/

The latter looks like this:

lib/
  order_processing/
  promotions/
  refunds/
  user_registration/

Too tasty to be true?

Organizing a Rails app in the latter style is not impossible. There are several posts describing patterns that can be used in pursuit of this goal. I have seen a few codebases that use Rails while pursuing a vertically-focused project layout quite aggressively, and others that effectively implement a hybrid approach. They are rare, though, because Rails does nothing to actively encourage the definition and organization of vertical slices. There are a few possible reasons for this.

Inconsistency

A vertical approach goes against the grain of the established patterns that contributed so strongly to Rails's popularity in the first place. "Convention over configuration" proved to be extremely attractive to developers, and deviating from those conventions could introduce inconsistency and confusion.

Domain-specificity

All web frameworks need to provide a way to get from the low-level details of an HTTP request to high-level domain logic and back down to an HTTP response. The specific vertical paths from request to response, however, are defined by the business; they are what make your web app unique. When starting a new project, rails new only knows that you'll need some tools that abstract away HTTP details. Organizing these horizontally in separate directories is a good starting point, as it reinforces the fact that they have separate responsibilities.

There are a few frameworks that provide ways to prepare your project for a vertical-first layout from the outset. These concepts can be useful, but using them at the beginning of a project is also risky. In his book "Building Microservices," Sam Newman discusses how getting vertical "cuts" wrong can end up being more costly than not attempting those cuts in the first place. Newman writes in the context of migrating to microservices, but the same idea applies to long-term maintenance of a monolith. Sometimes vertical slices need to evolve organically over time from a codebase, rather than being strictly defined up front. As an extension of this idea, Newman recommends refactoring domain concepts vertically within a monolith before extracting them into separate services.

Autoloading

How often do you think about requiring files when working in Rails? The autoloading Rails provides is convenient at first, but makes almost everything globally available. Over time, this causes the package principles to be forgotten and abandoned. Disabling Rails's autoloading permanently probably isn't worth the inconvenience, but it could be interesting as a temporary exercise to assess package cohesion and coupling. Ideally the added require statements would follow consistent patterns and effectively delineate vertical slices. However, if autoloading has been abused, they may instead expose a tangled web of cross-cutting concerns.

Overemphasizing CRUD

Most Rails apps' routes files consist primarily of calls to the resources helper for almost every Active Record model in the project. By default this helper defines routes that map directly to CRUD database operations on a model via a controller sharing that model's name. This works well for pure CRUD apps, but establishes a model-first routing pattern that may not fit other problem domains well. Coupling HTTP requests to data persistence details can lead to unnecessary shoehorning of behavior into imprecise controllers and actions: if upgrading a user account requires creating a Payment and updating a User, should the request be routed to PaymentsController#create or UsersController#update? Using more vertical domain names avoids this problem altogether—it can be handled by UserAccountController#upgrade.

Rails does provide good support for defining additional RESTful routes beyond basic CRUD actions, so the action above could use resources to define POST /users/:user_id/upgrade_account. However, this still implies a data-model-first convention rather than a domain-behavior-first one and necessitates prioritizing one model over another. Furthermore, DHH recommends (00:50:19) sticking to the default actions and creating new controllers around those actions rather than adding arbitrary actions to existing controllers. This leads to needlessly overloading generic words like update instead of using words that more clearly describe the behavior.

Adjust the recipe to taste

I mentioned above that Rails's commitment to "convention over configuration" is a big part of its success. The layout provided by rails new is much more welcoming than an empty directory, and the strict naming conventions are reassuring. The "Rails Way" often seems like an absolute, all-or-nothing approach to web application development, but you can choose to use whichever parts of it you prefer. My personal preference is a hybrid approach featuring a traditional, horizontal Rails app/ directory and a vertical, domain-driven lib/ directory. You may find other patterns that work better for your codebase. However you proceed, staying disciplined and keeping a careful eye on how Rails (or any tool) affects the design of your code will help you make the right decisions for your codebase before it becomes unmanageable.
