Multiple Gemfiles, Multiple Ruby Versions, One Rails

As part of a new project, I’m trying to run an app with several different Ruby versions and several different gem configurations. That second part is important because, for instance, you want a different version of the Psych YAML parser for older Rubies, and a version of Turbolinks that doesn’t hit a “private include” bug, and so on.

Those of you who know my current big benchmark know that I keep the same version of Rails while varying the Ruby version, to measure Ruby optimizations. This new project will be similar that way.

So, uh… How do you do that whole “multiple Gemfiles, multiple Rubies” thing at this point?

Let’s look at the options.

Lovely, Lovely Tools

For relative simplicity, there’s the Bundler by itself. It turns out that you can use the BUNDLE_GEMFILE environment variable to tell it where to find its Gemfile. It will then add “.lock” to that name to get the Gemfile.lock name. That’s okay. For multiple Gemfiles, you create each one by hand and let Bundler generate a lockfile for each. (There’s also a worse variation where you have a bunch of directories, each with just a ‘Gemfile.’ I don’t recommend it.)

Also using Bundler by itself, there’s the Gemfile.common method. The idea is that you have a “shared” set of dependencies in a Gemfile.common file, and each of your Gemfiles calls eval_gemfile “Gemfile.common”. If you want to vary a gem across Gemfiles, pull it out of Gemfile.common and put it into every individual Gemfile. The bootboot gem explains this one in its docs.

Speaking of which, there’s the BootBoot gem from Shopify. It’s designed around trying out new dependencies in an alternate Gemfile, called Gemfile.next, so you can see what breaks and what needs fixing.

There’s a gem called appraisal, far more complicated in interface than BootBoot, that promises a lot more functionality. It’s from ThoughtBot, and seems primarily designed around Rails upgrades and trying out new sets of gems for different Rails versions.

And that was what I could find that looked promising for my use case. Let’s look at them individually, shall we?

My Setup

The basic thing I want to do is have a bunch of Gemfiles with names like Gemfile.2.0.0-p648 and Gemfile.2.4.5. I could even make do with just one Gemfile that checked the Ruby version as long as it could have separate Gemfile.lock versions.

But I’m setting up a nice simple Rails app to check relative speed of different Ruby versions. As a side note, did you know that Rails 4.2 supports a wide variety of Rubies, from 2.0-series all the way up to 2.6? It does, at least so far for me. I’m specifically thinking of Rails 4.2.11, though you can find lots of nice partial compatibility matrices if you need something specific. And the upper bounds aren’t a guarantee, so Rails 4.2.11 is working fine with Ruby 2.5.3 at the moment, for instance.

So: let’s see what does what I want.

BootBoot

This one was easy for me to try out… but not for useful reasons. It only supports two Gemfiles, not a variety for different Rubies. So it’s not the tool for me. On the flip side, it’s very well documented and simple. So if this is what you want (current Gemfile, future speculative Gemfile) it seems like it would work really well.

But pretty obviously, it doesn’t do what I want for this project.

Appraisal

I got a lot farther testing this one. Appraisal allows a number of different Gemfiles (good) and overriding gems that are in the Gemfile (very good!).

You wind up with a bit of a cumbersome command line interface because you have to specify which appraisal (i.e. variation) you want for each command. But that’s not a huge deal.

And I loved that you could put the differences into multiple blocks in the same file, so you could really easily see that, e.g. Ruby 2.0.0 needed a specific Psych version, while the older Rubies needed an earlier Turbolinks.
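For example, an Appraisals file can look something like this sketch - the blocks and commands are Appraisal’s real interface, but the gem versions here are made up for illustration:

# Appraisals - one "appraise" block per variation
appraise "ruby-2.0" do
  gem "psych", "2.2.4"
end

appraise "ruby-2.3" do
  gem "turbolinks", "2.5.3"
end

Running “bundle exec appraisal install” generates a Gemfile and lockfile per block under gemfiles/, and something like “bundle exec appraisal ruby-2.0 rake test” runs a command against one specific variation.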

The dealbreaker with Appraisal, for me, is that you can’t specify a specific variation when you install the gems. It needs to look through all the appraisals at once and install them all at once. It’s fast, so that’s no problem. But that means I can’t specify a different Ruby version for the different variations, and that’s the whole reason I’m doing this.

If you were varying a different gem version (e.g. Rails,) appraisal is a really interesting possibility. It has capabilities that nothing else here does, like overriding gems that are in the shared Gemfile. But having to do all its calculations about what to install in a single command makes it hard to use with multiple Ruby executables — such as multiple CRuby versions, or CRuby versus JRuby.

What Did I Wind Up With?

Having tried and failed with the more interesting tools, let’s look at the approach I actually used - Gemfile.common. It’s good, it’s simple, it does exactly what I want.

Here’s an example of me using it to install gems for Ruby 2.4.5 and then run a Rails server:

BUNDLE_GEMFILE=Gemfile.2.4.5 bundle install
BUNDLE_GEMFILE=Gemfile.2.4.5 rails server -p 4321


It’s pretty straightforward as an interface, if a little bit verbose. Luckily I’m usually calling it from a script in a big loop, so I don’t have to manually type it much. You can also export the variable BUNDLE_GEMFILE, but that’s not a good idea in my specific case.

Here’s one of the version Gemfiles:

ruby "2.3.8"

eval_gemfile "Gemfile.common"

As you can see, it doesn’t even have a “source” for RubyGems. More to the point, any line needs to be in Gemfile.common or Gemfile.<version> but it cannot be in both. The degenerate form of this is to just have a bunch of separate Gemfiles and update them all every time anything changes, which I try to avoid.
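For completeness, here’s roughly what the shared file looks like - a minimal sketch, with the gem list standing in for whatever your app actually needs:

# Gemfile.common - the source line and all the shared gems live here
source "https://rubygems.org"

gem "rails", "4.2.11"
# ...plus every other gem that's the same across Ruby versions...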

You can also put in an extra gem or two if needed:

# Gemfile.2.0.0-p648
gem "psych", "=2.2.4"
ruby "2.0.0"

eval_gemfile "Gemfile.common"  # must not contain gem "psych" or ruby version!

So that’s pretty straightforward. After I run Bundler I get versioned Gemfile.lock files. And of course I check them in - they’re Gemfile.lock, after all.

Does That Mean Gemfile.lock Tools Are Always Bad?

Not at all! I’d say there are two big takeaways here.

One: at this point, Bundler does a lot of what you want it to do. It has better support for Platforms, and BUNDLE_GEMFILE is a powerful, versatile tool. So for simple or unusual cases, Bundler is a good tool to do this.
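(If you haven’t used platforms: it’s Bundler’s way of varying gems by Ruby implementation. A tiny sketch, with the gem names as examples only:)

# In a Gemfile - these blocks apply only on the matching Ruby implementation
platforms :jruby do
  gem "activerecord-jdbc-adapter"
end

platforms :mri do
  gem "sqlite3"
end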

Two: various tools for this tend to be specific, not general. Appraisal is great for what it does. BootBoot is a specific, simple tool for a common use case. But neither one is designed for random use cases, even random “I want more than one Gemfile.lock” use cases. For that, the Bundler is your go-to common denominator.

How Fast is Ruby 2.6.0preview3 for Discourse?

As many of you here know, I speed-check a lot of Ruby versions. I use a big benchmark called Rails Ruby Bench - basically, it sets up a highly-concurrent Discourse/Puma instance with 10 processes and 60 threads, then runs a lot of repeatably-random-generated HTTP requests through it, and times the results.

The idea is that it’s a “real world” Rails benchmark. When somebody says “Ruby version so-and-so gives 50% better performance on this new microbenchmark!”, the response always starts with, “yeah, but what does that mean in terms of real-world Rails performance?” I like to think the Rails Ruby Bench results are a pretty good indicator. So let’s see how well 2.6 does compared to 2.5, shall we?

What Changed?

First off, why would we expect 2.6 to be any different at all? What changed?

For starters, Aaron Patterson made a couple of memory changes, which save Rubyists some bytes. In Rails Ruby Bench, memory savings sometimes turn into speedups due to how garbage collection and caches work. It will always use “all the memory” (the EC2 instance’s full allotment,) but often it’ll go faster if it has more memory to spare.

Koichi Sasada also wrote some patches to use a “transient heap” for certain kinds of variable creation in order to speed up creating and destroying short-lived objects. Processing HTTP requests tends to create a lot of short-lived objects.

As always, there are lots of random minor speedups - that’s true of basically every release. But many of them won’t make any difference in a big Rails app, which is rarely CPU-bound, so we’ll evaluate them collectively. A CPU-heavy benchmark like OptCarrot is often a better way to see how much they help individually.

What Didn’t?

There’s one major change in 2.6 that needs a disclaimer: JIT. If you’ve been following 2.6 at all, you know that it includes the brand-new JIT implementation called MJIT… which is off by default and has to be manually turned on.
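Turning it on is just a command-line flag in 2.6, which you can also pass via RUBYOPT - something like:

ruby --jit some_script.rb
RUBYOPT="--jit" bundle exec rails server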

In the case of Rails, you shouldn’t turn it on. It slows things down rather than speeding them up. Basically, JIT isn’t finished and Rails is too big for the current implementation to handle it well.

Now: we’re totally going to speed-test it anyway, because have you met me? I like graphs. There will be another line on the graph. Of course there will. However, you should expect it to be worse, not better, than the 2.6 line with no JIT.

The question is mostly about how close the JIT line is to the non-JIT line, because that may tell us how close we are to JIT catching up and surpassing non-JIT (fully interpreted) Rails code. After it does that, it can start to make optimizations.

There are some other really interesting Ruby 3x3 changes that aren’t here yet - I think of Koichi’s Guilds implementation, for instance. We won’t be testing that.

Results and Versions

I found several bugs while doing this - that happens with prerelease Rubies. RVM doesn’t yet support mounting 2.6.0 preview3 from a local build properly since it’s not putting all the right stubs in place. You can install it just fine with “rvm install 2.6.0-preview3”, which is all most people want anyway. I just install Rubies in a weird way.

And it turns out that Ruby 2.6.0preview3 has a periodic interpreter internal error (think: segmentation fault) with JIT and Discourse, which made it difficult to test with JIT. More on that later.

So I wound up also testing a slightly earlier Ruby head-of-master version with a less-severe interpreter error. It’s marked as “pre-bundler” here because it’s before the Bundler was merged in, which also let it be installed from source and mounted by RVM. I found a way to not stop the test for that, but the “pre-bundler” JIT numbers are pretty suspect and have a higher variance than I’d like. Don’t take them as gospel.

What is this testing? Each of these tests is about half a million HTTP requests, divided among 50 batches. Each 10,000 requests in a batch is divided between 30 load-tester threads (around 333 req/thread) and processed by cluster-mode Puma running with 60 threads in 10 processes.

Yeah, okay, but how fast is it? Let’s have some graphs!

This was my first graphed version: I started up two consecutive EC2 instances, and tested the “pre_bundler” commit and (on the second instance) 2.6pre3. What I saw was… odd.

[Graph: sorted full-run times for Ruby 2.5.0, 2.6.0pre3, and the pre-bundler commit with and without JIT]

So what’s going on here? Well, the 2.5/2.6 story is a bit strange, but mostly they coincide. We’ll look at that more later. The JIT/no-JIT story is more interesting to me.

That yellow line with the weird slowdowns for most runs? That’s the pre-bundler JIT branch trying to handle RRB. You know what that looks like to me? It looks like Takashi, as he works on JIT, is fixing problems… and he’s only fixed all the problems for 45% of the runs. The other 55% of runs have at least one, and sometimes multiple, big weird slowdowns still. If you just look at the final numbers, you wind up believing that JIT is about 10% slower than non-JIT for Discourse. Which is, overall, true. But that graph up there shows that it’s not a simple 10% slower. Over half of all total runs are basically as fast with JIT as without it. A few are faster. But then the slow ones are often much slower, up to around twice as slow for the whole run combined.

(To answer your question in advance: I do not currently know what the slow runs have in common that the fast ones do not, and I should study that. Weirdly, the answer does not seem to be “a small number of really slow requests.” The single-request graph is surprisingly boring, so I’ve omitted it. So on some runs, all the requests seem to be somewhat slower. Is any of this related to that weird segfault? Maybe. The way it manifested kept me from keeping accurate records of exactly where it happened, which doesn’t help.)

The bottom orange line is the one to compare the JIT line to - it’s also the “pre-bundle” Ruby, not the released 2.6.0 preview 3. It seems to have had a weird slowdown as well, that only affected the top 4% or so of requests.

Also, keep in mind that the bottom three lines (everything but 2.6 pre-bundler w/ JIT) are much closer together. That Y axis starts around 45, not zero, so every run there is between about 45 seconds and 60 seconds to process a few hundred HTTP requests.

The lines for 2.5.0 and 2.6.0pre3 are much smoother - I don’t see any weird slowdowns. That makes sense. They’re polished Ruby releases and they’ve gotten a lot more love than random commits from head-of-master.

Okay, but why is 2.5 (arguably) faster than 2.6? I mean, that’s weird, right?

Yeah, But It’s a Statistical Artifact

There are two things going on with the lines for 2.5.0 versus 2.6 preview3.

One: check the Y axis. Those lines are actually very close together, well within 10% and usually much closer. It’s a small difference that looks big on that graph.

Two: I used two different EC2 instances. Yes, they were both dedicated m4.x2large instances, which give shockingly steady results over time in my experience. However, they do not give equally steady results across all instances, though it’s still not bad. The graph above is combining two different instances’ results for 2.5.0 (only) and comparing it with results from only one batch of 2.6.0 pre3. So let’s look at only one instance’s 2.5-versus-2.6pre3 results, filtering out the other instance’s results.

This is what that looks like:

[Graph: full-run times for 2.5.0 versus 2.6.0pre3, single EC2 instance only]

It tells a very different story, doesn’t it? This basically says “2.6.0pre3” is very slightly faster, across basically all requests. That difference is within a single standard deviation on my test so I’m not going to give you a percentage - it would be a very small one. But it does look consistently (very) slightly faster rather than slower, so that’s probably good.

Why no 2.6-pre3-with-JIT line? That would be the nastier version of the segfault, the one that kept me from successfully measuring with-JIT performance at all for that Ruby version on that instance. I tried measuring it separately, ignoring all runs with segfaults, and got… weird results. Given that we know JIT isn’t supposed to be good for Rails at this point, I’m going to stop speculating on exactly what’s going on there. But the results are a bit weird, even if you cut out the ones with internal errors in the interpreter.

Takeaways

Short version: 2.6.0pre3 is maybe a little faster than 2.5.0. But for big Rails apps, that difference is somewhere between “barely detectable” and “undetectable” without JIT. The impressive thing will be when JIT gets good enough to make a real difference for larger applications. And we’re not there yet. In fact right now for this prerelease, JIT is a bit unstable.

There have been small memory savings and a few miscellaneous fixes. But it’s hard to make a big I/O-bound Rails app faster. Ruby’s improving, but it can’t work miracles.

With that said, your old Rails apps are still getting faster, and that will keep happening. Every year is a little extra free performance boost. This year may be a performance boost for only your small apps, not your big Rails apps.

Bundler is Built Into Ruby 2.6.0preview3

Big Bundler changes arrived with Ruby 2.6 preview 3. No, not the really huge RubyGems 3 and 4 changes. Also not the Bundler 2.0 changes where Gemfile changes name to gems.rb. Those are still in the future, mostly.

No, preview3 is where Bundler got merged into Ruby proper. They’ve been working on it for a while. When you build the Ruby source code, you get a Bundler executable right inside Ruby. It’s a lot like rdoc or irb now. It can also have better integration with RubyGems, which has been part of core Ruby since Ruby 1.9.

You could be joining your relatives to eat Thanksgiving leftovers as you listen to Uncle Bob’s highly-political take on the US President tear-gassing asylum seekers, or you could be reading arcane Bundler minutiae. I think we both know which is more pleasant.

Some Early Bugs

As you’d expect, there are a few bugs to iron out. While rvm merged a PR for preview3, they didn’t change any of their stubs, which may be interesting in the long term - do you need to install a Bundler gem even though Bundler is built-in? What if they’re different versions?

rbenv and ruby-build also allow installing it — I’m not sure they need any special treatment for it, given the way they’re different from RVM.

(For either RVM or ruby-build, you may need to update before installing the new one. That's how you usually make new versions available. And this is a very new version.)

It’s also possible to hardcode some of your Bundler assumptions in a way that gets broken (as, ahem, I do in Rails Ruby Bench.)

Also, their new version numbers seem to be things like “2.a”? Some of the changes are a bit confusing to me.

In general, this is a great time to try upgrading to 2.6.0 preview3 and see whether you see any Bundler-related breakage.

Installing

I use MacOS on my personal laptop, so I’m going through the joys of getting Ruby to use Homebrew’s OpenSSL 1.1 with 1.0 still installed. In other words, mostly it breaks. Here’s what I do with RVM:

rvm install 2.6.0-preview3 -C --with-openssl-dir="$(brew --prefix openssl)"

You can do something very similar in Ruby-Build or rbenv:

RUBY_CONFIGURE_OPTS=--with-openssl-dir="$(brew --prefix openssl)" rbenv install 2.6.0-preview3

Right now RVM insists on installing OpenSSL 1.1, but Ruby doesn’t seem to build with it. OpenSSL has been a giant pain for MacOS (among others) for as long as I can remember. So, y’know, whatever your current coping strategy is, you can use it here too. Mine is only whining in blog posts because I don’t do whiskey.

What’s It Do?

Having both built-in and installed Bundler is likely to be weird… And also very, very common. Below is what I saw when I did that.

First, I built Ruby 2.6.0preview3, which has the new built-in Bundler. I checked the version - and it worked. Yay!

C02RP0G1G8WM:opt noah.gibbs$ bundle --version
Bundler version 1.17.1

Okay, so what if I install Bundler as a gem? And with a lower version?

C02RP0G1G8WM:opt noah.gibbs$ gem install bundler -v 1.17.0
Fetching bundler-1.17.0.gem
Successfully installed bundler-1.17.0
Parsing documentation for bundler-1.17.0
Installing ri documentation for bundler-1.17.0
Done installing documentation for bundler after 2 seconds
1 gem installed
C02RP0G1G8WM:opt noah.gibbs$ bundle --version
Bundler version 1.17.0

Interesting! The lower-version gem takes precedence. Presumably that’s due to how RVM and paths work. Can we use the Bundler version-specific stub hack to use a specific version?

C02RP0G1G8WM:opt noah.gibbs$ bundle _1.17.0_ --version
Bundler version 1.17.0
C02RP0G1G8WM:opt noah.gibbs$ bundle _1.17.1_ --version
Traceback (most recent call last):
	2: from /Users/noah.gibbs/.rvm/gems/ruby-2.6.0-preview3/bin/bundle:23:in `<main>'
	1: from /Users/noah.gibbs/.rvm/rubies/ruby-2.6.0-preview3/lib/ruby/2.6.0/rubygems.rb:307:in `activate_bin_path'
/Users/noah.gibbs/.rvm/rubies/ruby-2.6.0-preview3/lib/ruby/2.6.0/rubygems.rb:288:in `find_spec_for_exe': can't find gem bundler (= 1.17.1) with executable bundle (Gem::GemNotFoundException)

Basically yeah, we can. The built-in 1.17.1 wasn’t installed the same way, and so it doesn’t have a version-specific stub. It’s not a gem, it’s a built-into-Ruby executable like irb or rdoc. And in fact, if we specifically use that binary, we get the built-into-Ruby version of Bundler, not the gem:

C02RP0G1G8WM:opt noah.gibbs$ /Users/noah.gibbs/.rvm/rubies/ruby-2.6.0-preview3/bin/bundle --version
Bundler version 1.17.1

That makes sense.

Does it work if I uninstall the gem?

C02RP0G1G8WM:opt noah.gibbs$ gem uninstall bundler
Remove executables:
	bundle, bundler

in addition to the gem? [Yn]  Y
Removing bundle
Removing bundler
Successfully uninstalled bundler-1.17.0
C02RP0G1G8WM:opt noah.gibbs$ bundler --version
Bundler version 1.17.1

Looks good!

What’s My Takeaway?

In no particular order:

  • As of Ruby 2.6.0preview3, Bundler is part of core Ruby

  • You can still install Bundler as a gem, and it basically works

  • If you have a nice new Bundler but it’s getting an old one instead, uninstall the old gem

  • There will be Bundler bugs in the new year - this change is a good first place to look

  • Instead of joining your family for holiday conversation, consider testing code with new Ruby, reading lots of my old blog posts, getting unsociably drunk or really anything else… Enjoy the news responsibly and in moderation, though.

Have a joyous holiday season, for a holiday of your choice! I wish you many new Bundler and Ruby features in the coming year.

Ruby 3x3 and RubyConf Los Angeles

I’m fresh back from RubyConf in Los Angeles. And Keep Ruby Weird. Also, barely returned from, and just wrote an article about, RubyConf Malaysia. Have I mentioned that I’ll be traveling a lot this next year, too?

I’ve just talked to a lot of Rubyists. I’ve learned a few things, including about Ruby 3x3. Let’s talk about where that is, shall we?

Ruby Speed

I continue to update and run Rails Ruby Bench. Speed is one of my big interests. Let’s talk about how Ruby’s speed is doing.

The JIT’s good and getting better. Takashi Kokubun keeps working on it constantly. It’ll be in the Ruby 2.6.0 Christmas release, and it was also in all the recent Ruby preview releases. Method inlining, one of the big speed benefits of JIT, is nearly here! It wasn’t in preview3, but it sounds like it’ll be there for Christmas. You can see more about the current state of Ruby 2.6 JIT in Takashi’s slides from RubyConf (current as of Nov 2018.)

Wondering if Ruby 3x3 will actually be three times faster than Ruby 2.0? Those same slides put OptCarrot at 2.53x faster with the current changes. I think we’ll make it to 3x!

The major JIT disclaimer is Rails. Currently Ruby 2.6 JIT makes Rails slower. Takashi has been working on it, but there are some hard problems there. He’s also collecting other benchmarks where JIT makes code slower to fix similar cases.

Progress has been good. I learned from Charles Nutter and Tom Enebo’s RubyConf presentation that for a simple “just CRUD with scaffolded actions” Rails app, 2.6 with JIT is very nearly the same speed as 2.6 without JIT. So Takashi’s work has helped, it’s just not quite there yet.

(Not as fast as JRuby, though. Those folks are constantly optimizing. When the recording of their RubyConf talk goes up, watch it.)

There have also been other speedups in 2.6, of course. Aaron Patterson continues to work hard on the memory system, including a couple of changes in Preview3 that reduce memory usage. For memory-limited scenarios like Rails Ruby Bench, that translates into extra speed - you should see a benchmark from me soon with the latest numbers for the Ruby 2.6 prerelease.

(Unrelated: did I mention that the RubyConf venue was kind of a palace? It was. The Millennium Biltmore in Los Angeles, if you want to look it up.)

How Much Do We Need Speed?

RubyConf is a great chance to survey Ruby folks and see where lack of speed is biting them. Not only did I go down the row of vendor companies asking, I asked a lot of random Rubyists. I also asked on Twitter. The answers surprised me a bit.

Short version: other than Rails, it looks like Ruby is mostly fast enough. Nobody complains about new speedups, but… Ruby just isn’t slowing people down, day-to-day. It has a reputation as slow. There may be people who don’t use Ruby for some speed-related reason. But existing Rubyists, who use Ruby now, don’t seem to hit speed problems with it.

Note that, again, I said “other than Rails.” That’s a large caveat.

Concurrency

Koichi Sasada has been working on Guilds for a while, and has had several talks on his prototype implementation. The idea is neat. I’d love to see the GIL limitations broken without screwing up threading for new programmers. This is a feature that would primarily benefit concurrent workloads and Rails performance, which is nice.

The code is finally available! You can also read more in his RubyConf slides.

Unfortunately, the performance isn’t great yet. This isn’t going to give multiprocess Rails a run for its money, performance-wise… yet. Koichi is looking at some of the tradeoffs of the current design. And Guilds aren’t likely to ship in Ruby 2.6 this Christmas. The design isn’t fully nailed down, and there are still some bugs in the current implementation.

But we have a current implementation to play with. Progress!

(My bear is named the Super Princess. She makes friends easily. Now you know!)

Type Checking

Matz has been talking about gradual typing in Ruby for a while now - it’s one of the three big pillars of Ruby 3x3, along with concurrency and performance.

Like concurrency, there’s still some design in progress. They’re still having design meetings about it and refining plans. There have been several early prototype implementations of different designs, and they’re still at it. The “TypeDB” ideas from this year’s RubyKaigi sounded promising.

Matz’s big design goal here is that no new type information will be required, but new tooling can find more bugs. A TypeScript-style additional type file is likely, but will always be optional. And Matz really hates type annotations, so those aren’t going to happen.

At RubyKaigi in May they were talking about a “Type Database” file that would collect different type information from different tools - for instance, you could run unit tests in a special mode that would record what types each method took. And you could run YARD or similar docs tools to add the documented types to the database. And you could run a static analysis tool and see what it could tell about the appropriate types for each call site. In Ruby, each of these methods is limited. But all of them together can find a lot of bugs.

The tooling for this is all early prototypes as far as I know. I can’t even provide you a link - I’ve only seen it mentioned briefly in talks.

Yeah, But When?

In the Q&A with Matz at the end of RubyConf, he said to expect Ruby 3x3 around Christmas of 2020. I think we’ll be three times faster before that point - probably well before Christmas of 2019, at the current rate. The 2020 release will likely have Guilds in some form, but I wouldn’t be surprised if they’re still a little rough. And there will be some form of type checking tool. It’s hard to be sure what that will look like for that release, though.

In practice, you can expect all of these features to arrive a little at a time. Performance is nearly here. Guilds are here but rough. Types are barely here at all. And all of these will keep slowly improving.

RubyConf Malaysia and Getting the Most from a Distant Conference

I had a great trip to RubyConf MY — thanks for inviting me, Tevanraj! I won’t subject you all to a travel post about Kuala Lumpur, even though it’s awesome. I will talk about some interesting Ruby and development stuff from the conference and before. I’ll also talk about how to get more from a conference far from home.

Kuala Lumpur is a city of gigantic, awe-inspiring buildings. The constant construction goes way above where most cities stop, vertically speaking. Also, click on any other image in this post for bigger pics.

Showing Up Early Can Be a Great Way to Meet Developers

If you show up early or stay late, you can potentially meet speakers, locals, expats and so on. Michael Kohl was briefly in Malaysia rather than Thailand, and I got to hang out with him a bit before the conference. Thanks, Michael! We talked about how his consultancy uses Rails in slightly different ways than the standard out-of-the-box experience, a bit better for experts and a bit less beginner-friendly. It makes sense that we all evolve a style over time, and he had some interesting ideas about how to scale Rails to more complex use cases. If you get a chance to talk to him, maybe online, he has some great ideas about that!

I made it to one local Malaysian meetup. It was a DevOps meetup rather than specifically Ruby, but still a great experience. MindValley sponsors a lot of local development stuff, including RubyConf MY and hosting a lot of different meetups - so going to a meetup there was a great way to see an important piece of the local tech scene.

A neat thing about going: seeing what’s the same and what’s different. Everybody was talking about the same cloud providers (AWS, Google Compute, Azure) in slightly different amounts (more Azure, very little Google.) The talks were about the same technologies we’re excited about in California (Kubernetes, serverless including Lambda.) More people were working at freelance, consulting and contracting companies, and a lot fewer at tiny semi-insane product companies. I feel pretty confident about our local speakers and SREs — California deserves a lot of its reputation, you know? But the communities aren’t so different.

And the venue could have been in Mountain View or SF. A huge floor of beanbag chairs, surrounded by Avengers models and toys, with a list of company values on the wall that could have been from a California startup just as easily.

If you head to a conference far from home, there’s a lot to be understood, and it’s worth taking some extra time to look around and talk to people. At first, you don’t even know what to look for, so just look around.

Conference Culture Varies, Too

Malaysia was an interesting conference experience: it was hard to get the attendees to talk much to the speakers.

It was mostly a deference thing, I think. I found a guy to talk to the morning of day one (hi, Joe!) and once he figured out I was a speaker, he got a lot quieter. He also thought it was very weird that I was just sitting somewhere in the audience, not right up front in a special section. I said that in the big US conferences, the up-front section was reserved for new folks and people guiding them, not as a high-prestige speakers-and-VIPs section. He seemed to think that was pretty weird too.

RubyKaigi in Japan had some of that deference, too, though not as much as Malaysia. One thing I love about US Ruby conferences is just how clear we make it: new people are our lifeblood, and we need them desperately. That’s… not everybody’s take on it. That makes sense. I don’t feel like most languages, libraries, etc are great about it either. Ruby makes very real technical sacrifices for new-folk-friendliness. It’s hard to imagine, say, C++ doing that.

So: if you’re going to a conference outside the US, or that might otherwise not be run like our main national conferences, it may be worth encouraging folks to talk to you (if you’re a speaker) or to talk to the speakers (if you’re not.) A lot of what people get from conferences is what can’t easily be recorded.

Local Flavor

One of the cool things about a distant conference: you’ll see speakers you otherwise wouldn’t. In Malaysia, the attendees were a pretty even mix of Malaysian, Indonesian and Vietnamese, plus a smaller number of others. The speakers had some classics and well-known speakers from the US circuit (e.g. Aaron Patterson, AllieP, Britt Martin, Nadia Odunayo) but also folks I don’t see as much. Ajey Gore from Go-Jek in Indonesia gave an amazing ending keynote. Janice Shiu of MindValley gave a great talk about algorithmic poetry and pronunciation. Ankita Gupta and Sean Nguyen’s talk on GraphQL was great. I could have seen some of these folks elsewhere — Ankita has spoken at RedDotRubyConf in Singapore, for instance. But it turns out there’s a lot going on that I’d miss. And hey, I haven’t made it to Singapore yet.

And since they’re at the conference, you can sit around and chat with excellent people you wouldn’t otherwise meet. I talked to Ajey a fair bit before his keynote and he’s great company. I like anybody who can talk both tech and business fluently, and he’s a powerhouse in both. But if you introduce yourself to conference folks (and you should!) then you’ll meet awesome people you otherwise wouldn’t… especially if it’s not a conference that’s local to you.

It’s also an excuse to do other stuff. I was lucky enough to be invited to see the Batu Caves with a couple of the other speakers. But when I’m just driving down to Los Angeles, it’s hard to convince myself to see tourist stuff. It’s not far away, you know? The farther from home you are, the better a reason to say “hey, let’s see that fun thing in the tourist guides.” Kuala Lumpur has a wide variety of beautiful places. But so do most cities that would host a conference, you know?

A Distant Conference is Still a Conference

A lot of what you get from being at a Ruby conference is talking to people and the occasional kick of inspiration. Great talks are great, but they get recorded. Great technical information is already in thousands of blog posts you can Google.

You may notice, basically, that this post is “go talk to people, go talk to people, go talk to people.”

For an excellent conference experience at any conference anywhere, may I recommend going and talking to people? It’s a good thing.

QA at Appfolio

TLDR: An ongoing exercise of preventing issues before they become bugs.

Depending on your prior experience with Software Quality Assurance, your perception of what QA is responsible for in the Software Development Life Cycle might range from “What is QA?” to “They are testers”. When it comes to asking at what point of the SDLC QA gets involved, all too often organizations rely on the “ready for QA” mindset to dictate when their QA team member is thought of or brought into the mix. At times, QA might be added at the end of the process -- like frosting on a cake. But that cake might have a questionable interior under that questionable frosting!


Here at AppFolio, we view QA a little differently. Okay, maybe a lot differently. Our goal is to ensure Quality throughout the process; baking Quality into the cake. I love quality cake!

Allow me to share a bit of our approach to Quality Assurance with you. Hummm… where to start?!?

We’ll start with context. Our teams own the challenges they are responsible for and, as such, drive the process of gaining the insight needed to understand the domain and areas of influence for a given problem. We work together to come up with the right solution, we collectively build the implementation, and we plan and manage the release of the solution to our customers. We also help support the solution and monitor its success. All of it, collectively, as a team.


Having QA members as active participants on the team gives them insight into every aspect of the process and challenge, thus enabling them to ask the right questions early. By bringing their perspectives and thoughts to the team as soon as possible, QA members help to prevent issues before they become bugs.

One very significant engineering practice that differentiates AppFolio from other software companies, and frees QA up to focus on preventing bugs, is that our developers are responsible for our Test Automation Framework and for writing our automated tests. Our framework and tests are just as important to us as our production code; they are not an afterthought or second-class code, which is why our engineers with the most expertise in coding are tasked with this responsibility. We will go into more detail on this topic in a future blog post.

We believe that quality starts with the team and is owned by the team. Even though the whole team owns quality, the QA Engineer is the champion of Quality, with a lens focused on:

  • Keeping our QA mindset: focus on preventing bugs, not just finding them.

  • Identifying risks, alerting the team, and ensuring the risks are addressed in some capacity.

  • Exploratory testing as soon as possible, where it makes sense.

  • Looking for opportunities to apply the above from beginning to end, and every spot in between.


A distinction to point out is that our QA members are engineers, not Testers. Testing is a tool - or many different tools, depending on how specific you want to get. It’s one of the tools we have in our toolbox. We hire people to learn, be creative, and use the different tools at their disposal for the right situation. You don’t call a person who knows how to use a hammer and uses it for the appropriate job a “Hammerer”; you call them a Carpenter. From an engineering perspective, in order to contribute to creating software, building healthy teams, and improving processes, we are required to learn and be familiar with different tools, technologies and techniques that span the realm of technology, process, and team building. A learning mindset and reflection are needed to build up and maintain our skills so that we can be effective at positively impacting our spheres of influence.


Which brings us to one of our most important guiding principles: Make those around you successful and you will be successful. As your sphere of influence grows, so does your impact in helping to make people around you successful. This guiding principle is so important to us that it is also used as a measurement for career advancement on the QA team. To a degree, we view QA as a service job:  serving the developer, team, and customer. Asking ourselves, “How can I make this easier, better, more productive?”

One important example of how we go about ensuring the success of the team is the importance we place on our QAE’s ability to understand and work on interpersonal relationships on the team. The intent is to work towards Psychological Safety on the team. We need to build good relationships with our teammates in order to help make sure they feel “safe” and “heard”; constantly gauging the team's health.


QA is still responsible for testing deliverables, and because of this there is a natural force which pulls QA’s focus towards the testing part of the process. I call this Test Gravity. The more “testing” of deliverables a QA engineer has at one time, and/or the greater the “testing” effort, the greater the gravitational pull on the focus of the QA engineer toward the “testing” phase and away from the rest of the process. If QA engineers do not manage how many deliverables they need to test, along with the effort and timing involved, they will end up reactive, fall behind the team’s work, and have to make an effort to catch back up. Thus, it’s imperative for the QA engineer to be aware of Test Gravity -- to account for it, proactively mitigate it, and have strategies to handle it.

In summary, QA is an ongoing exercise of preventing issues before they become bugs and trying to help make those around you successful. In order to achieve this outcome, QA is required to gain knowledge, stay ahead of the upcoming work, and be an active participant on the team throughout the SDLC. QA at AppFolio requires a hungry curiosity, an appetite for learning, craving to be creative, and a desire to quench your analytical thinking on challenges that span the breadth and depth of software product engineering.  Mmmm… Quality cake. :)

thanks for the cake Gary!



Traveling Ruby Conversation!

Hey, Rubyists!

I’m going to be traveling as I work for a while. So expect announcements of interesting locations where I may be found. If you’d like to talk Ruby, I’m interested!

I’ll also try to contact local meetups in my various locations - I’d love to give talks or just hang out with the local folks!

I’m currently in Anaheim, California. Let me know if you’re nearby!

And I’ll be in Malaysia for RubyConf MY from October 15th-30th. Do I have any Malaysian readers?

Ruby Method Lookup, RubyVM.stat and Global State

In Ruby, it can sometimes be a bit involved figuring out where a constant or a method comes from - not just for you, but for Ruby, too! When you call "foo," is it foo from your current class? One of its parents? A module included from a parent class? Somewhere else?

And of course, Ruby has a few objects whose methods are available just about everywhere - Object, BasicObject and Kernel.

When I wrote about the Global Method Cache, we touched on that just a little. Let's go a bit deeper, shall we?

Better yet, we'll learn about some things that you should carefully *not* do to keep your performance good, and how to check whether you're doing them.

Method Caching and Cache Invalidation

Methods on three specific Ruby objects are special - BasicObject, Object and Kernel. BasicObject is the root of the inheritance tree - even the simplest Ruby objects inherit from it. Object is slightly more full-featured than BasicObject, but it's basically the same thing - it includes a module called Kernel, which gives it a lot of methods. And Object is the default for inheritance, so nearly every object inherits from it. As a result, every object inherits from BasicObject and nearly every object inherits from Object, and includes Kernel. Also, any “top level” definitions are definitions on Object (thanks to Benoit Daloze for correcting my original explanation!)
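You can check both of those claims from irb:

2.5.0 :001 > Object.ancestors
 => [Object, Kernel, BasicObject]
2.5.0 :002 > def a_top_level_method; end
 => :a_top_level_method
2.5.0 :003 > Object.private_instance_methods.include?(:a_top_level_method)
 => true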

As a result, they get special hardcoding in Ruby's performance code. Ruby keeps a count of how many times you have defined methods on those three objects. It calls that count your “Global Method State.” And it tags all method cache entries with that number. Which means that if you add a new method on any of them, all the old method cache entries get invalidated...

That's good! You don't want to use stale entries that would point to the wrong method definition.

Also, that's bad. All your old cache entries are gone, and now you have to look them up again. That can be slow.

If it happens during the setup at the beginning of your Rails app... Well, sure. That sounds like Rails, defining methods in all sorts of places. Any cache you set up before it's done with that is wasted, sure. Sounds like Rails. Also, sounds fine.

But if you do that in your inner loop, that's really bad. It means you're looking up all your methods fresh, every time, even methods that don't share a name with the method you're defining on Object (or Kernel, or BasicObject.) That's no good.

Aaron Patterson has also written more about Global Method State and its relatives, if you want more details.

Constants and Cache Invalidation: Global Constant State

Ruby has really involved constant lookup. In the same way as for methods, defining a new constant can change which constant a given piece of code should use - caches built before the definition may now be wrong. Ruby has a really similar solution.

When you define a new constant, it invalidates your old constant caching - Ruby has to look everything up again. The state number Ruby uses to track this is called the Global Constant State.

Your Global Cache Detective: RubyVM.stat

In CRuby, you can't "fix" the cache-invalidation problem except by not defining methods on Object, Kernel or BasicObject in your inner loop. If you need to define new methods constantly, do it in some other class.

But how can you tell if you — or one of your dependencies — are doing that?

Remember Ruby's GC.stat? In the same way, Ruby has a RubyVM.stat that tracks changes to (what it calls) "global_method_state".

2.5.0 :002 > RubyVM.stat
 => {:global_method_state=>137, :global_constant_state=>1115, :class_serial=>7109}

Each of these tracks an internal quantity for the VM. :global_method_state is the one we’ve just been talking about. :global_constant_state is the counter for constants. And class_serial is complicated to explain — but basically, each individual class keeps a similar counter of its own, and this global number is the source of those per-class values. Unlike global_method_state, bumping it isn’t a big performance problem. Mostly you can ignore class_serial.
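Here’s a sketch of those counters moving, continuing the irb session above - exact values will vary, but the increments are the point:

2.5.0 :003 > def oh_no_defined_on_object; end   # top-level defs land on Object!
 => :oh_no_defined_on_object
2.5.0 :004 > RubyVM.stat[:global_method_state]
 => 138
2.5.0 :005 > OH_NO_A_NEW_CONSTANT = 7
 => 7
2.5.0 :006 > RubyVM.stat[:global_constant_state]
 => 1116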

Great! But How Do I Use It?

You might reasonably ask, “what would I do with this?” If you’re finding your app slows down unaccountably at some times, or you’re worried that it might, it’s one more thing you can check - has global_method_state increased a lot? Or has global_constant_state increased a lot? Either case might mean that you’re doing something slow that busts your caches. OpenStruct has had this problem, for instance (edit: fixed to say OpenStruct, not Struct - thanks, commenter!)

If you’re using Rails, you might consider doing this in a Rack middleware - if either of those increases during a request, that may be a sign that you’re using something slow and you’ll want to look into that.
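Here’s a minimal sketch of that middleware - the class name and messages are made up, but RubyVM.stat is the real interface:

# Hypothetical Rack middleware: warn when a request busts the global caches.
class CacheBustDetector
  def initialize(app)
    @app = app
  end

  def call(env)
    before = RubyVM.stat
    status, headers, body = @app.call(env)
    after = RubyVM.stat

    if after[:global_method_state] != before[:global_method_state]
      warn "Global method cache busted during #{env['PATH_INFO']}"
    end
    if after[:global_constant_state] != before[:global_constant_state]
      warn "Constant cache busted during #{env['PATH_INFO']}"
    end

    [status, headers, body]
  end
end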

Ruby's Main Object Does What?

Last week, Mischa talked about debugging a problem with Rake tasks polluting the global namespace. This week, I'll talk about how that happens.

Spoiler: there will be a bunch of spelunking, first in Ruby and then in Ruby's C source code. We'll talk about how to find code in both places.

What Are We Looking For?

When we define methods on Ruby's top-level "main" object, they show up as instance methods on Object so they're callable absolutely everywhere. But "main" is mostly just a random instance of Object, and isn't part of the standard class hierarchies like, say, Kernel or Object. So why and how do those methods show up on Object?

First I checked what was going on in irb. You don't know what you're looking for if you can't write an example! Here's what I did:

2.5.0 :001 > def bobo; print "Bobo!"; end
=> :bobo
2.5.0 :002 > class NotBobo; def really; puts "I am pretty sure."; bobo; end
2.5.0 :003?>   end
=> :really
2.5.0 :004 > a = NotBobo.new
=> #<NotBobo:0x00007fc0fe031940>
2.5.0 :005 > a.really
I am pretty sure.
Bobo! => nil

That bit where it prints "Bobo!" instead of giving a no-method exception? That's the feature we're talking about.

(I hadn't actually known this happened. How'd I miss that? If you already knew that, congratulations! This is a good time to feel superior about it ;-)

This is a good example of a feature that can be mysterious if you don't already know it. So let's talk about how you'd track this feature down, by talking about how I recently did just that.

Searching in Ruby

Ruby is a language with great reflection methods. So first, we'll check where that behavior could be coming from in Ruby.

The obvious places to look for weird behavior in Ruby are the object in question ("main" in this case,) and Object and Kernel.

Let's see what "main" actually is in irb:

2.5.0 :001 > self
 => main
2.5.0 :002 > self.class
 => Object
2.5.0 :003 > self.singleton_class
 => #<Class:#<Object:0x00007f9ea78ba2e8>>
2.5.0 :005 > self.methods.sort - Object.methods
 => [:bindings, :cb, :chws, :conf, :context, :cws, :cwws, :exit, :fg,
     :help, :install_alias_method, :irb, :irb_bindings, :irb_cb,
     :irb_change_binding, :irb_change_workspace, :irb_chws,
     :irb_context, :irb_current_working_binding,
     :irb_current_working_workspace, :irb_cwb, :irb_cws, :irb_cwws,
     :irb_exit, :irb_fg, :irb_help, :irb_jobs, :irb_kill, :irb_load,
     :irb_pop_binding, :irb_pop_workspace, :irb_popb, :irb_popws,
     :irb_print_working_binding, :irb_print_working_workspace,
     :irb_push_binding, :irb_push_workspace, :irb_pushb, :irb_pushws,
     :irb_pwb, :irb_pwws, :irb_quit, :irb_require, :irb_source,
     :irb_workspaces, :jobs, :kill, :popb, :popws, :pushb, :pushws,
     :pwws, :quit, :source, :workspaces]

Huh. That feels like a lot of instance methods on the top-level object, and a lot of them irb-flavored. Maybe this is somehow an irb thing? Let's check in a separate Ruby source file with no irb loaded.

# not_in_irb.rb
def method_on_main
  print "Bobo!"
end

class NotBobo
  def some_method
    print "Yup.\n"
    method_on_main
  end
end

a = NotBobo.new
a.some_method

print "\n\n\n"

print (self.methods.sort - Object.methods).inspect

Feel free to run it. But when I do, I get an empty list for the methods. In other words, that whole list of methods on main that aren't on Object came from irb, not from plain Ruby. Which seems like there's nothing terribly special there. Hrm.

What about the singleton class (a.k.a. eigenclass) for main? Maybe it has something individually that works only on that one object? Again, I'll run this outside of irb.

dir noah.gibbs$ cat >> test2.rb
print self.singleton_class.instance_methods
dir noah.gibbs$ ruby test2.rb
[:inspect, :to_s, :instance_variable_set, :instance_variable_defined?,
 :remove_instance_variable, :instance_of?, :kind_of?, :is_a?, :tap,
 :instance_variable_get, :public_methods, :instance_variables, :method,
 :public_method, :define_singleton_method, :singleton_method,
 :public_send, :extend, :pp, :to_enum, :enum_for, :<=>, :===, :=~,
 :!~, :eql?, :respond_to?, :freeze, :object_id, :send, :display,
 :nil?, :hash, :class, :clone, :singleton_class, :itself, :dup,
 :taint, :yield_self, :untaint, :tainted?, :untrusted?, :untrust,
 :trust, :frozen?, :methods, :singleton_methods, :protected_methods,
 :private_methods, :!, :equal?, :instance_eval, :instance_exec, :==,
 :!=, :__id__, :__send__]

Ah. There's a lot there, and I'm not finding it terribly enlightening. Plus, it's really likely whatever method I care about is going to be defined in C, at least in CRuby. Let's bite the bullet and check Ruby's source code...

No Luck - Let's Make More Luck

Okay. That pretty much exhausts our Ruby options. Beyond this point, it's time to check in C. I'll use the latest (August 2018) Ruby code, because this is a long-term feature and hasn't changed any time recently -- sometimes you'll need a very specific version of the Ruby code, depending what you're looking for.

So: first we're looking for the "main" object. The word "main" is used in lots of places in Ruby, so that will be hard to track down. How else can we search?

Luckily, we know that if you print out that object, it says "main". Which means we should be able to find the string "main", quotes and all, in C. I'm going to use The Silver Searcher, a.k.a. "ag", for code search here - you can also use Ack, rgrep, or your favorite other tool.

We can ignore anything under "test", "spec", or "doc" here, so I won't list those out.
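With ag, that search looks something like this (the --ignore flags just cut out those directories):

ag '"main"' --ignore test --ignore spec --ignore doc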

thread.c
4936:    rb_define_singleton_method(rb_cThread, "main", rb_thread_s_main, 0);

object.c
3692: *     String(self)        #=> "main"

vm.c
3177:    return rb_str_new2("main");

addr2line.c
782:    if (line->sname && strcmp("main", line->sname) == 0)

lib/rdoc/generator/template/darkfish/index.rhtml
15:<main role="main">

lib/rdoc/generator/template/darkfish/servlet_root.rhtml
16:<main role="main">

lib/rdoc/generator/template/darkfish/servlet_not_found.rhtml
13:<main role="main">

lib/rdoc/generator/template/darkfish/page.rhtml
15:<main role="main" aria-label="Page <%=h file.full_name%>">

lib/rdoc/generator/template/darkfish/table_of_contents.rhtml
2:<main role="main">

lib/rdoc/generator/template/darkfish/class.rhtml
19:<main role="main" aria-labelledby="<%=h klass.aref %>">

iseq.c
742:    const ID id_main = rb_intern("main");

(...)

Okay. The one in object.c is on a comment line. The ones in the rdoc generator are classes on markup elements -- not what we're looking for. The one in thread.c is for the name of the main thread. Let's investigate addr2line.c, iseq.c and vm.c as the promising choices.

In addr2line.c it's messing with printed text in dumping a backtrace, not defining methods on the top-level object:

void
rb_dump_backtrace_with_lines(int num_traces, void **traces)
{
  /* ... */
    /* output */
    for (i = 0; i < num_traces; i++) {
        line_info_t *line = &lines[i];
        uintptr_t addr = (uintptr_t)traces[i];
        /* ... */
        /* FreeBSD's backtrace may show _start and so on */
        if (line->sname && strcmp("main", line->sname) == 0)
            break;

(By the way - in C, anything surrounded by the slash-star star-slash markers is a comment.)

In iseq.c, it's naming the types of instruction sequences -- not what we want:

static enum iseq_type
iseq_type_from_sym(VALUE type)
{
    const ID id_top = rb_intern("top");
    const ID id_method = rb_intern("method");
    const ID id_block = rb_intern("block");
    const ID id_class = rb_intern("class");
    const ID id_rescue = rb_intern("rescue");
    const ID id_ensure = rb_intern("ensure");
    const ID id_eval = rb_intern("eval");
    const ID id_main = rb_intern("main");
    const ID id_plain = rb_intern("plain");

Finally, in vm.c we find something really promising:

/* top self */

static VALUE
main_to_s(VALUE obj)
{
    return rb_str_new2("main");
}

VALUE
rb_vm_top_self(void)
{
    return GET_VM()->top_self;
}

That's more like it! And it's next to something being referred to as "top self", which sounds pretty main-like. So let's see where main_to_s gets used. I checked with ag, and it's only in vm.c:

void
Init_top_self(void)
{
    rb_vm_t *vm = GET_VM();

    vm->top_self = rb_obj_alloc(rb_cObject);
    rb_define_singleton_method(rb_vm_top_self(), "to_s", main_to_s, 0);
    rb_define_alias(rb_singleton_class(rb_vm_top_self()), "inspect", "to_s");
}

Cool. That looks useful - it's defining the main object's method "to_s", so we have the right object. It's saying that main is an instance of Object (called "rb_cObject" here) and defining to_s and inspect on it. That doesn't tell us anything else special about main... But knowing that it's called "top_self" does let us find other files that use main, so let's look for it.

Main, top_self, Ruby

If we search for 'top_self', we can see what we do with main. In fact, we can see everything Ruby does with the main object...

inits.c
24:    CALL(top_self);

eval.c
1940:    rb_define_private_method(rb_singleton_class(rb_vm_top_self()),
1942:    rb_define_private_method(rb_singleton_class(rb_vm_top_self()),

internal.h
1908:PUREFUNC(VALUE rb_vm_top_self(void));

vm_method.c
2131:    rb_define_private_method(rb_singleton_class(rb_vm_top_self()),
2133:    rb_define_private_method(rb_singleton_class(rb_vm_top_self()),

ruby.c
671:    VALUE self = rb_vm_top_self();

vm.c
466:    vm_push_frame(ec, iseq, VM_FRAME_MAGIC_TOP | VM_ENV_FLAG_LOCAL | VM_FRAME_FLAG_FINISH, rb_ec_thread_ptr(ec)->top_self,
2142:   rb_gc_mark(vm->top_self);
2443:    RUBY_MARK_UNLESS_NULL(th->top_self);
2574:    th->top_self = rb_vm_top_self();
3093:   th->top_self = rb_vm_top_self();
3101:   th->ec->cfp->self = th->top_self;
3181:rb_vm_top_self(void)
3183:    return GET_VM()->top_self;
3187:Init_top_self(void)
3191:    vm->top_self = rb_obj_alloc(rb_cObject);
3192:    rb_define_singleton_method(rb_vm_top_self(), "to_s", main_to_s, 0);
3193:    rb_define_alias(rb_singleton_class(rb_vm_top_self()), "inspect", "to_s");

vm_eval.c
1394:    return eval_string_with_cref(rb_vm_top_self(), rb_str_new2(str), NULL, file, 1);
1406:    return eval_string_with_cref(rb_vm_top_self(), arg->str, NULL, arg->filename, 1);
1474:    VALUE self = th->top_self;
1479:    th->top_self = rb_obj_clone(rb_vm_top_self());
1480:    rb_extend_object(th->top_self, th->top_wrapper);
1484:    th->top_self = self;
1516:       val = eval_string_with_cref(rb_vm_top_self(), cmd, NULL, 0, 0);

vm_core.h
596:    VALUE top_self;
870:    VALUE top_self;

lib/rdoc/parser/c.rb
476:      next if var_name == "ruby_top_self"

proc.c
3175:    rb_define_private_method(rb_singleton_class(rb_vm_top_self()),

load.c
577:    volatile VALUE self = th->top_self;
589:    th->top_self = rb_obj_clone(rb_vm_top_self());
591:    rb_extend_object(th->top_self, th->top_wrapper);
619:    th->top_self = self;
996:            handle = (long)rb_vm_call_cfunc(rb_vm_top_self(), load_ext,

mjit.c
1503:    mjit_add_class_serial(RCLASS_SERIAL(CLASS_OF(rb_vm_top_self())));

variable.c
2188:    state->result = rb_funcall(rb_vm_top_self(), rb_intern("require"), 1,

Since we're looking for what happens when we define a method on main, we can ignore some of what we see here.  Anything that's being initialized, we can skip. And of course, we can still ignore anything in doc, test or spec. Here's roughly how I'd summarize what I see in these files when I check around for top_self:

  • inits.c: initialization, like the name says
  • internal.h, vm_core.h: in C, files ending in ".h" are declaring structure, not code; ignore them
  • mjit.c: this is initialization, so we can skip it.
  • lib/rdoc/parser/c.rb: this is documentation.
  • load.c: this is interesting, and could be what we're looking for... except it happens only on require or load.
  • variable.c: this is only on autoload, and it just makes sure we autoload into the main object.
  • vm_method.c: this is defining "public" and "private" as methods on main. Neat, but not what we're looking for.
  • eval.c: this is defining "include" and "using" as methods. Also neat, also not quite it.
  • ruby.c: this takes some tracking down... But it's just requiring any library you pass with "-r" on the command line into main.
  • vm_eval.c: this is defining various flavors of "eval", which happen on the "main" object.
  • vm.c: there's a lot going on here - initialization and garbage collection, mostly. But in the end, it's not what we want.
  • proc.c: finally! This is what we're looking for.

Any Ruby source file can be great for learning more about Ruby. It's worth your time to search for "top_self" in any of these files to see what's going on. But I'll skip to the actual thing we're looking for - proc.c.

 

The Hunt... And the Capture!

In proc.c, if you look for top_self, here's the relevant bit:

void
Init_Proc(void)
{
  /* ... */
  rb_define_private_method(rb_singleton_class(rb_vm_top_self()),
                          "define_method", top_define_method, -1);

And that's what we were looking for. It's in an init method, it turns out - d'oh! But it's also redefining 'define_method' on main, which is exactly what we're looking for.

What does top_define_method do? Here it is, the whole thing:

static VALUE
top_define_method(int argc, VALUE *argv, VALUE obj)
{
    rb_thread_t *th = GET_THREAD();
    VALUE klass;

    klass = th->top_wrapper;
    if (klass) {
        rb_warning("main.define_method in the wrapped load is effective only in wrapper module");
    }
    else {
        klass = rb_cObject;
    }
    return rb_mod_define_method(argc, argv, klass);
}

Translated from C and from Ruby internals, this is saying several things.

If you're doing a load-in-an-anonymous-module, it warns you that you probably won't get what you want - your top-level definition won't be properly global, since you asked it not to be. Otherwise, it defines the method on Object, as an instance method, not just on main. And that is the feature we were looking for, straight from CRuby's source.
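You can watch this happen from plain Ruby, too. Here's a quick IRB-style demonstration (shout is just a made-up example method):

# Called at the top level, so the receiver is main - exactly top_define_method:
define_method(:shout) { |msg| msg.upcase }

# The method landed on Object as an instance method, so every object has it.
# (I use send in case your Ruby defines it as private, like top-level def does.)
5.send(:shout, "hi")          # => "HI"
"anything".send(:shout, "hi") # => "HI"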

Winding Up

We've succeeded! We tracked down a simple but well-hidden feature in the CRuby source. It's cocoa and schnapps all around!

Benoit Daloze of TruffleRuby points out that this is all much easier to read if you define your Ruby internals in Ruby, like they do. He's not wrong.

But in case you're still using CRuby for things like Rails... It may be worth your time to learn to look around in the internals! And I find that nothing makes it work better than practice.

Also, this tells us exactly what's up with last week's post about Rake! Now we know how that works, which may be important when we wish to fix that...

Rake Does What?: A Debugging Story

The Mystery

While working on upgrading one of our apps to Rails 5 I noticed that suddenly migrations were failing with the following error:

StandardError: An error has occurred, all later migrations canceled:
wrong number of arguments (given 2, expected 1)

The migration it was failing on just had something like this:

class LeakyMigration < ActiveRecord::Migration[4.2]
  results = select_one <<~SQL
    SELECT 1 FROM table_name
  SQL

  if results
    raise "Panic!"
  end
end

Digging into the ActiveRecord 4.2 and 5.0 documentation it became pretty clear that there wasn't a change to the method signature for select_one, so what gives? Looking at the stack trace I noticed that the method call was going through one of our private gems - specifically through a rake task we have defined there. Huh?

The Investigation

I opened the rake task and found something like the following:

namespace :db do
  task :leaky_task do
    min_db_schema_version
    # ... other things
  end

  def select_one(sql)
    ActiveRecord::Base.connection.select_one(ActiveRecord::Base.send(:sanitize_sql, sql, "NONE"))
  end

  def min_db_schema_version
    select_one('SELECT min(version) AS version FROM schema_migrations')['version']
  end
end

A quick check found that the method signature for sanitize_sql_for_conditions - which sanitize_sql is an alias for - had changed. An optional second parameter was no longer supported, so it was understandably upset when we tried to give it one.

But why is this select_one being called during the migration anyways? Do methods defined in a namespace get shared across the namespace? While perhaps a bit inconvenient, that wouldn't be completely unreasonable. A quick check eliminated that possibility. Even with a different namespace the exception was getting raised during the migration.

It couldn't possibly be defining the method on Object, could it?

The Experiment

Well, a bit of quick investigation showed me two things: your application's rake tasks aren't loaded by default in a Rails console, and once they are loaded, a method defined in one can be called on any object.

You can experiment with this by creating a new empty rake task, defining a method foo in it, and then starting a Rails console. Try something like User.send(:foo) and you'll get a NoMethodError - so far so good. But now load your application's rake tasks (which aren't loaded by default) by calling AppName::Application.load_tasks, and call User.send(:foo) again.

It runs! Madness!
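Here's that experiment as a sketch you can try yourself (the file name, namespace and AppName are all placeholders):

# lib/tasks/experiment.rake
namespace :experiment do
  task :noop do
  end

  # Not part of any task - just a method defined in the rake file
  def foo
    "hello from a rake file"
  end
end

# Then, in a Rails console:
User.send(:foo)                  # => NoMethodError
AppName::Application.load_tasks  # loads the app's rake tasks
User.send(:foo)                  # => "hello from a rake file"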

These two things alone are worth digging into more, but first I think we should answer the question - why is our migration calling our select_one rather than the one provided by ActiveRecord?

In order to find out, I commented out the offending rake task and added a raise statement to the ActiveRecord definition of select_one. Re-running the migration yielded a stack trace that highlighted something very interesting...

The Finale

All those nice little SQL methods like select_one and update aren't defined in ActiveRecord::Migration or any of its parents - they're all defined on the connection object. ActiveRecord::Migration defines a custom method_missing implementation that forwards the call to the connection object.
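In simplified form, that forwarding looks something like this - a sketch of the idea, not the real Rails source:

class ActiveRecord::Migration
  # select_one, update and friends all end up here, and get
  # forwarded to the database connection object.
  def method_missing(method, *arguments, &block)
    connection.send(method, *arguments, &block)
  end
end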

And as you saw above, a method defined in a rake file gets defined on Ruby's main object - which defines it on Object, which defines it nearly everywhere.

Unfortunately, these two things combined lead to trouble. When rake tasks are loaded (which they always are while a task is running), those methods defined on main get defined on every object. Then, when we call select_one in the migration, the migration sends the call to itself - which it now knows how to respond to, since it inherits from Object.

You might reasonably ask, "how does defining a method on 'main' define it everywhere?" You can check that it does do that for yourself. But next week, we'll dig down into how that happens and why it works.

Does ActionCable Smell Like Rails?

Not long ago, Rails got ActionCable. ActionCable is an interface to WebSockets and (potentially) other methods of turning a normal sent-to-browser web page into a two-way connection that can keep exchanging data. There have been a lot of these attempts over the years - WebSockets, Server-Sent Events, Comet and Server Push (HTTP1 and HTTP2) are all protocols to do that. There have been many Ruby implementations of these. Good old AJAX was probably the first widespread attempt. I'm sure there are some I missed. It's been the "near future" for as long as there's been a present.

Do you already know why we want WebSockets, and how bad most solutions are? Scroll down to "ActionCable and Convenience" below. Or heck, scroll wherever you like - I'm not the boss of you.

Yeah, But Why?

One big question is always: do we need this? What good does it do us?

Let's look at one-way updates (server-to-browser) separately from two-way.

The "hello, world" of one-way interactive web pages is the page update. When Twitter or Facebook shows you how many updates are waiting, or pops up new posts, those are interactive web pages. In order to tell you what's waiting for you, or to deliver it, they have to interact with a page that's already in your browser. You can also show site news banners that change, or pop up an alert like "site going down for maintenance at midnight!" The timing is often not that important, so a little delay is usually fine. AJAX is the usual way to do this, or Server-Sent Events can be a bit more efficient at the cost of some complexity.

 See that "99+" up there? For you to know how many notifications are waiting, Twitter has to tell your browser somehow.

See that "99+" up there? For you to know how many notifications are waiting, Twitter has to tell your browser somehow.

The "hello, world" of two-way interactive web pages is the chatbox. From a simple "everybody sees everything" single-room chat to something more complex with sub-areas and filtering and private messages, you have to have two-way interaction to make chat work. If you put intercom.io or similar customer service chatbots on your site, they have to use two-way interaction of some kind. Any kind of document collaboration (e.g. Google Docs) or multiplayer game needs it. Two-way interaction starts to make sense quickly if you're doing something that people update together - auction sites, for instance, or fast-updating financial markets (stock trading, futures markets, in-game auction houses.) You can do two-way interaction with AJAX but it starts to get inefficient quickly, and the latency can be high. Long-polling (e.g. Comet, SSE) can help, or you can work around the browser with plugins (e.g. Flash or Unity.) This is the situation WebSockets (and thus ActionCable) were really designed for.

There are one-way updates in the browser-to-server direction, but forms and AJAX handle those just fine. They are often things like form submit, error reporting or analytics where the user takes action, but there's only a simple acknowledgement -- "yup, you did that, the server saw it and you can get back to what you were doing."

So What's Different With Interactivity?

In old-style HTTP, the browser gets an HTTP page. It may already be cached, so the browser may not use the network at all! Some pages can work entirely offline. The browser can keep requesting pages or resources from the server when and if it wants to. That may fail - the network connection isn't guaranteed. But the server can't interfere with most of this. The server can't send anything it wasn't asked for. The server may not even know that this is all going on. If the HTML page came from cache, the server probably has no idea that any of this is happening. In a normal web framework, the request gets served and the server moves on to other requests without a backward glance.

If you want interactivity with a constant connection to the server, that changes everything. The server needs to sit and spin, holding the connection open. It needs to see what other connections are doing and figure out what to send to everybody else every time you do something. If Bob sends a chat message, it may go out to 250 other people who have a certain page open.

The Programming Model

One big difficulty with two-way interaction is that web servers and frameworks aren't usually designed to handle it. If you keep a connection open all the time and you can send to it at random times... How does that look to the programmer? How does it work? That's new for Rails... And Sinatra. And nearly any Ruby framework that isn't using EventMachine. So... nearly all of them.

It's not just WebSockets that have this problem, incidentally. HTTP/2 adds server push, which also screws up all the frameworks... But not quite as much as WebSockets does.

In fact, very few application servers (e.g. Puma, Passenger, Unicorn, Thin, WEBrick) are designed for pushing from the server either. You can hack long-polling or server push on top of NGinX or Apache, but... that's not how nearly anybody uses them. You really want an evented server. There are some experimental Ruby evented webservers. But mostly, that's not how you write Ruby web applications. Or nearly any other web applications, in any language. It's an inefficient match for HTTP1 apps, but it's the only reasonable match for HTTP/2 apps with much two-way interactivity.

Node.js folks can laugh at the rest of us here. They do evented web applications all the time. And even they have a bit of a rough road to HTTP/2 and WebSocket support, because their other tools (reverse proxies, caches, etc) are designed in the traditional way, not the HTTP/2 way.

The whole reason for HTTP's weird, hacked-on model of sessions and cookies is that it can't keep a connection to the server or repeatedly identify individual clients. What would we do if it could? Maybe we'll find out.

But the fact remains that it's weird and hard to combine a long-running stateful server with lots of connections (WebSockets) to a respond-and-forget stateless web server (NGinX, Apache, Puma, Passenger, etc.) in the current day and age. They're not designed to work that way.

I only know of one web server that was ever designed with this in mind: Zed Shaw's Mongrel2. It's a powerful, interesting model for a server architecture. It's also rough around the edges, very raw and not used in production by anybody I know of. I've tried to get it running with Ruby web apps, and mostly you rapidly discover that nobody else designs anything that way, so it's hard to use with anything you don't write from scratch. There are bespoke frameworks inside big tech companies (e.g. LinkedIn, Facebook) that do the same thing. That makes sense. They'd have to, even with their current architecture. The rest of us have a long road to get there.

ActionCable and Convenience

So: Rails bundled WebSocket support into recent versions. Problem solved, right? Rails is awesome about making things convenient, so we're golden?

Yes... and no.

I've worked with the old solutions to this like Faye and Juggernaut. There is absolutely no question that ActionCable is a smoother experience. It starts the extra server automatically. It includes all the extra pieces of software. Secure sockets work approximately out-of-the-box, maybe, with a bunch of ugly caveats that aren't the server's fault. But they're ordinary HTTPS caveats, not really new ones for WebSockets. Code reloading works, kind of. When you hit "reload" in your browser the classes reload at least, like, 80% of the time. I mean, unless you have connections from multiple browsers or something else unreasonable.

Let's be clear: for hooking WebSockets up to a traditional web framework this is a disturbingly smooth experience. That's how bad the old ways of doing this are. Many old problems like "have I restarted both servers?" are nearly 100% solved in development, and are only painful in production. This is a huge step up.

The fact that code reloading works at all, ever, is surreal. You can nearly always get it to work by restarting the server and then hitting reload in the browser. Even that wasn't really reliable with older frameworks (ew, browser caching.) Thank you, Rails asset pipeline! And there's only one server in development mode, so you don't have to bring down multiple server-side processes and reload your browser.

This stuff used to be incredibly bad. ActionCable brings the pain level down to something a smartphone-app programmer would call tolerable.

(I'm an old phone operating system programmer. We had to flash ROMs every time. You kids don't know how good you have it. Get off my lawn!)

But... Does It Smell Like Rails?

Programming in Rails is a distinctive experience. From the controller actions to the views to the config files, you can just glance at a chunk of Rails code to figure out: yup, this is Ruby on Rails.

For an example of "it's not the web, but it smells like Rails," just look at ActionMailer. It may be about sending email, but it uses the same controller actions and the views feel like Rails. Or look at the now-defunct ActiveResource, which brings an ActiveRecord-style API to remote procedure calls. Yeah, okay, it was kind of a bad idea, but it really looks like Rails.

I've been trying to figure out for a while: is there a way to make ActionCable look more like Rails? I'll show you what I've found so far and I'll let you decide.

ActionCable kind of wants to be structured like Rails controllers. I'll use examples from the ActionCable Rails Guide, just to make sure I'm not misrepresenting it.

Every user, when they connect to your site, gets a single ApplicationCable::Connection object:

# app/channels/application_cable/connection.rb
module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_user
 
    def connect
      self.current_user = find_verified_user
    end
 
    private
      def find_verified_user
        if verified_user = User.find_by(id: cookies.encrypted[:user_id])
          verified_user
        else
          reject_unauthorized_connection
        end
      end
  end
end

You can think of it as being a bit like ApplicationController. Of course, all your Channels inherit from ApplicationCable::Channel, which fills a similar base-class role. In the Guide there's not much in it. In fact, it can be very problematic to have many methods in it. I'll talk about why a bit later.

Then you have actual Channels inherited from ApplicationCable::Channel, which correspond roughly to pub/sub subscriptions. You can subscribe to one or more streams on each channel, which are sort of subchannels. This is explained poorly, but it's not too complicated once you figure it out: ActionCable wants one more level of channel-ness than most pub/sub libraries do, but it still boils down to "listen on channel names, get messages for those channels." ActionCable's streams are approximately what every other pub/sub library would call a channel, and ActionCable's Channels are namespaces of streams.

It's clearly trying to look like a Rails controller:

# app/channels/chat_channel.rb
class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_from "chat_#{params[:room]}"
  end
 
  def receive(data)
    ActionCable.server.broadcast("chat_#{params[:room]}", data)
  end
end

But then that starts breaking down...

Where ActionCable Smells Funny

ActionCable actions are a bit like Rails controllers, and they use a somewhat different API than every other Pub/Sub library to get it. That's not necessarily a problem. In fact, that's fairly Rails-like, as far as it goes.

An ActionCable action can only be called from a browser, though. So there are two fairly distinct forms of sending: browsers send actions, while the server sends "broadcasts" - and a broadcast can be triggered from anywhere on the server, including from inside an action. That's a bit like Rails controller actions, though it's a bit unintuitive. And then the server sends back JSON data, like a JSON API.
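A broadcast doesn't need a Channel action at all - any server-side code can send to a stream. Something like this, where room_id and the payload are placeholders:

# From a background job, a model callback, or anywhere else on the server:
ActionCable.server.broadcast("chat_#{room_id}", sent_by: "Bob", body: "Hello!")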

Then, JavaScript receives the JSON data and renders it. Here's what that looks like on the client side:

# app/assets/javascripts/cable/subscriptions/chat.coffee
# Assumes you've already requested the right to send web notifications
App.cable.subscriptions.create { channel: "ChatChannel", room: "Best Room" },
  received: (data) ->
    @appendLine(data)
 
  appendLine: (data) ->
    html = @createLine(data)
    $("[data-chat-room='Best Room']").append(html)
 
  createLine: (data) ->
    """
    <article class="chat-line">
      <span class="speaker">#{data["sent_by"]}</span>
      <span class="body">#{data["body"]}</span>
    </article>
    """

This should look like exactly what Rails encourages you NOT to do. It's ignoring everything Rails knows about HTML in favor of a client-side framework, presumably separate from whatever rendering you do on the server side. You can't easily use Rails view rendering to produce HTML, and no Rails controller is instantiated, so you can't use things like render_to_string. There's probably some way to patch it in, but that doesn't seem to be an expected choice. It's certainly not done by default, and at a minimum requires including Rails' internal modules into your Channels.

Also: please don't include random modules into your Channels. Every public method on every Channel object becomes publicly callable via JavaScript, so including anything you don't control is a really unsafe thing to do - and that goes for Rails' view-rendering modules too. You need near-absolute control of your set of public methods for security reasons. Exposing whatever Rails happens to make public from internal modules as a public API isn't a good idea even if you love Rails views (and I do.)

I'm not saying it's hard to do it in a more Rails-like way. You can build a simple template renderer using Erubis for this if you want. I'm saying ActionCable won't do this for you, or particularly encourage you to do it that way. The documentation and examples all seem to think that ActionCable should ship JSON around and your JavaScript/CoffeeScript client code should do the rendering. To be fair, that's likely to be more space-efficient than shipping around HTML, especially because WebSocket compression is fairly primitive (e.g. permessage-deflate). But compromising the programmer API for the sake of bandwidth fails the "smells like Rails" sniff test, in my opinion. Especially because it's entirely possible to have the examples mostly use the server-rendering-capable flavor and mention that you can convert to JSON and client rendering rather than vice-versa.

Here's what a very simple version of that Erubis template rendering might look like. It's not difficult. It's just not there by default:

require "erubis"  # assumes the erubis gem is available

# Render an ERB template from app/views and push the resulting HTML
# out to the client as a replacement for the selected element.
def replace_html_with_template(elt_selector, template_name, locals: {})
  template_name += ".html.erb" unless template_name["."]
  filename = File.join("app/views", template_name)
  template = Erubis::Eruby.new(File.read(filename))
  replace_html(elt_selector, template.evaluate(locals))
end

(The replace_html method above needs to be a broadcast to the client, in a format the client recognizes. Also, don't make this a public method on the Channel object - I actually wrap the Channel object with a second object to avoid that problem.)

In general, ActionCable feels simple and "raw" compared to most Rails APIs. Want HTML escaping? Ask for it explicitly. Curious who is subscribed to what? Feel free to track that yourself. Wonder what some of the APIs do (e.g. identified_by)? Read the source code.

That may be because it's an immature API. ActionCable is fairly recent, as Rails APIs go. Or it may be that ActionCable isn't ever going to be as polished. WebSockets are a thing you add to an existing app for better performance, as a rule. So maybe they're being treated as an advanced feature, and the polish isn't wanted. It's hard to tell.

A Few More Details on How It Works

Before wrapping up, let's talk a bit more about how ActionCable does what it does.

If you're going to have a bunch of connections held open, you're going to need network sockets for it and some way to do work for those connections. ActionCable handles this with a thread pool, with four workers by default. If you're messing with the database in your ActionCable code, that means you should increase your database connection pool by that number of workers to avoid running out.
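Here's roughly what that looks like in config. The worker_pool_size setting is real ActionCable; the pool arithmetic is just my rule of thumb:

# config/environments/production.rb
Rails.application.configure do
  # ActionCable's worker pool - four threads by default
  config.action_cable.worker_pool_size = 4
end

# config/database.yml - leave headroom for those workers:
#   pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5).to_i + 4 %>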

ActionCable eats a ton of memory to do its thing. That's fine - it's the best Ruby environment for WebSocket development, and you didn't start using Rails because it was the fastest or smallest - if you wanted that, you'd be writing bespoke C binaries for CGI scripts, or maybe assembly language.

But when you're looking at deploying to a production environment with a reasonable number of users, you may want to look for a more efficient, API-compatible solution like AnyCable. You can find a RubyKaigi talk by Vlad Dementyev about that and how it compares, if you'd like more information.

While ActionCable uses only a single server in development mode, it's assumed you'll want multiple in production. 

So What's the Takeaway?

ActionCable is a great way to reduce the pain of WebSockets development compared to the older solutions out there. It provides a relatively clean development environment. The API is unusual - not quite Rails, not quite standard Pub/Sub. But it's decently clean and usable once you get used to it.

A Rails API is often carefully polished. They tend to be hard to misuse with strong indicators of how to do things The Rails Way and excellent documentation. ActionCable isn't there yet, if that's where they're headed. You'll need more sophistication about what ActionCable builds on to avoid security problems - more like old-style Rails routing or controllers, less like heavily-secured Rails APIs like forms, HTML escaping or forgery protection.

How Can I Use Ruby 2.6 JIT?

I gave a talk on using JIT in Ruby 2.6 at Southeast Ruby - a great regional conference with a very friendly, cozy vibe. If you get a chance, I highly recommend going next year! It'll be August 1st and 2nd, 2019.

Wondering about what JIT is, how it works and why you'd use it? Or how to try it out in (currently pre-release) Ruby 2.6? Here are my slides. Don't miss the presenter notes, which have extra detail beyond just what's in the slides.

 

Can I Use Ten 10% Speedups to Make Ruby Instant?

There are a lot of little speedups to Ruby around. I write about a bunch of them. It wouldn't be too hard to collect 100% worth of little 3% and 5% and 10% speedups. Presumably it won't make every Ruby program instant, but what would it do? Heck, Ruby has had way more than ten big speedups over the years. Shouldn't Ruby be instant right now?

Let's talk about how you do performance math - how you check those improvements and how they add up. Then when you find speedups in the wild, you can guess a little about how much they'll help you.

Why Rockets Explode On The Launchpad

When I was a kid, we did a math exercise in class. NASA has a hard job because rockets are complicated. Each piece of a rocket has to be very reliable.

How reliable?

Well, say the pieces are 99.9% reliable and you have 10,000 of them. How likely is it that at least one piece fails and your rocket blows up?

About 99.996% likely to blow up. Don't put anything you care about (like your torso) into a rocket with those numbers. Luckily nobody in my middle school was likely to be an astronaut, so it all worked out.
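If you'd like to check that math yourself, it's a Ruby one-liner:

# Chance that at least one of 10,000 parts, each 99.9% reliable, fails:
puts 1.0 - 0.999 ** 10_000   # => 0.99995... - call it 99.996%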

As a kid, the point was "rockets should blow up all the time. I wonder if I'll get to watch?"

As a crusty old adult, I suggest the lesson, "multiplying over and over does some things you don't expect."

So let's talk performance.

Speeding Up Ruby

It's easy to find little things that give a 5% speedup here or a 10% speedup there in Ruby. For some of them you just upgrade, but others want some configuration.

So what if you added up all of them? What if you grabbed five 10% speedups and ten 5% speedups? That's 100% worth of speedups, so any Ruby code should finish instantly with the right answer, right?

Alas, no. Three 10% speedups sound like they should add up to a 30% speedup. But it's not addition - it's repeated multiplication, like rocket reliability.

Let's talk math, future astronaut.

Let's say you have a Ruby program that takes ten seconds to run and you'd like it faster. You apply one of those 10% speedups, which works perfectly and brilliantly. Now your program runs in 9 seconds. Yay!

So you apply another 10% speedup. Unfortunately, it's not saving you 10% of ten seconds. It's saving you 10% of nine seconds - that other second is gone already. So you save nine tenths of a second, for a runtime of 8.1 seconds, not 8 seconds.

So your speedup isn't 10% + 10% = 20%. Instead, it's multiplication: 90% of the runtime times 90% again leaves 81% of the original runtime. So two 10% speedups add up to a 19% speedup - and that's why you get 8.1 seconds, not 8 seconds.

Hey, I didn't make the rules.

(There are actually a few cases where they add up to more than that for complicated reasons. Those cases are weird and rare in the real world.)

Except the Real World Sucks

Now if you take two actual 10% speedups and measure the result, you're likely to be disappointed. You often won't get 20% or even 19%. It may be more like 16% or 18%. Sometimes it'll be 10% - both speedups together are exactly as good as just one.

Why?

Sometimes it's because they solve the same problem. If your first optimization is to optimize garbage collection for a 2% speedup, and your second optimization is to turn off garbage collection completely for a 5% speedup, your total will be 5%. That first 2% doesn't do you any good at all.

In general, the more two optimizations "touch", the less their total is going to be. If you save 7%, 10%, 5% and 3% on four different CPU optimizations, it will almost never add up to 25% (note: 0.93 * 0.90 * 0.95 * 0.97 = 0.771, or about 22.9% speedup even if they all combine perfectly.) There's generally some overlap between one optimization and another.
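Here's that multiplication spelled out in Ruby, so you can plug in your own numbers:

# Each speedup leaves (1 - speedup) of the runtime; combine by multiplying.
speedups = [0.07, 0.10, 0.05, 0.03]
remaining = speedups.reduce(1.0) { |runtime, s| runtime * (1.0 - s) }
puts remaining       # => 0.7713... of the original runtime
puts 1.0 - remaining # => about 0.229 - 22.9%, even with zero overlap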

But it's worse than that. Because the rocket reliability math is too optimistic.


Two Liters In Fifteen Seconds: GO!

If you were a mathematically inclined fourteen-year-old with nothing to do, you might decide to try a scientific experiment. Specifically, if your parents weren't paying attention for a bit, you could try to roll up as many Dungeons and Dragons characters as possible in a short time. Okay, maybe this is just me. Uh, or some anonymous fourteen-year-old who is only in an example.

You could roll and write out the characters for ten minutes. Let's say you could do about five high-school-quality D&D characters in that time.

But! You discover that by chugging a liter of coffee first, you can get it to six characters in ten minutes. Or with a liter of Sprite, five and a half. We'll ignore the character quality, and how many are ripoffs of Drizzt Do'Urden.

So then, how many characters can you write out if you chug a two-liter of Mountain Dew, which combines that much caffeine and twice that much sugar?

A naive additive mathematician would say seven - five characters base, plus one more for the caffeine, plus one more (0.5 * 2) for the sugar. No problem!

A rocket-reliability mathematician would say 6.6 characters (20% speedup plus two 8.3% speedups, 0.8 * 0.917 * 0.917 = 0.673, or 32.7% speedup.)

And our example high-schooler discovers that his hand cramps up after six, plus the walls are now vibrating.

Operations Theory: The Academic Study of "This Class is Pass/Fail"

As our hand-cramp math suggests, you can optimize a particular problem all you like... And it'll help, right up until it doesn't. Mountain-Dew-fueled creativity doesn't help if your hand cramps first.

At any given time there will be one part of the program slowing you down ("the bottleneck.") If you can speed it up until something else is the slow part, any further speedup is wasted. Congratulations! You succeeded! All extra credit is rounded down to zero. Each bottleneck is pass/fail. Pass your papers forward and don't talk to your neighbor.

For instance, let's say your current bottleneck is shoving enough network packets through. It's about 7% slower than whatever your next bottleneck is. If you can find ten different ways to cut your network traffic by 5% each, then they should combine to give you... about 7% speedup. Because now the problem is the next bottleneck, and it really doesn't matter how fast or small your network packets are. Next!

If this sounds simple, I recommend using the phrase "Operations Theory" to describe it and mentioning that it comes from Eliyahu Goldratt's 1984 book "The Goal" (which it does.) Doesn't it sound fancier now?

It's also not quite this simple, as a skilled profiler can tell you. If you speed up an already-fast part of the program, it won't usually give you zero speedup. It'll usually just give you a very small one. That's one of the weird ways two small optimizations can add up to a big one - a big optimization to something that's not currently your bottleneck may turn into a really important optimization... If the bottleneck changes.

By the way - this is also why you should be careful adding up several small optimizations that work well right now. If they're currently giving good speedups, it's probably because they're related to your current bottleneck. Which means when that bottleneck is solved, they'll all turn into tiny (or zero) speedups.

So the Conclusion Is... It's Complicated

This is why it's hard to predict what five different 5% speedups add up to. How much do they overlap? Are they in bottlenecks in your program? If one of them changes the bottleneck, would one of the other speedups suddenly matter more?

I solve this by measuring with a big end-to-end performance test. That's inconvenient, but it changes "it's complicated" to "it's slow and takes a bunch of computer time."

But when people suggest just taking all the known speedups and putting them together, keep in mind that that can be complicated. If you're adding speedups into the core language, great! That means they're constantly tested together. If you're talking about rarely-used tuning knobs, those get complicated fast when you combine them.

 

Ruby's Global Method Cache

Hey, folks! Lately I've been exploring Ruby environment settings and how much they can help (or not) your app speed. I feel like I've already hit most of the major tuning knobs on Ruby at one point or another... But let's look at one I haven't yet: Ruby's global method cache. What is it? How do you set it? How much speed does it give?

What's the Global Method Cache?

When you use a particular method, Ruby has to figure out what classes and/or modules and/or refinements define it, and which one to use in that particular location. It's a much more involved process than you'd think, especially with how Ruby handles constants and scope. In a lot of cases you can figure out what defines that method once and keep using the lookup that you did the first time - it's slow to re-run, so we don't.

There are two ways Ruby saves those lookups: the inline method cache, and the global method cache. After I explain what they are, we'll talk about the global method cache.

The inline method cache lives at a specific call site. It is "inline" in the sense that it's cached in your Ruby code where you call the method. That seems simple and sane. When it works, the global method cache doesn't get used - the lookup happens the first time the code is hit, and gets reused afterward.

The global method cache is for cases where that doesn't work - method_missing, respond_to? and refinements are examples. In those cases, it's very unlikely that the same place in your code will always get the same answer for "what is the method here?" Here's how Pat Shaughnessy puts it:

Depending on the number of superclasses in the chain, method lookup can be time consuming. To alleviate this, Ruby caches the result of a lookup for later use. It records which class or module implemented the method that your code called in two caches: a global method cache and an inline method cache.

Ruby uses the global method cache to save a mapping between the receiver and implementer classes.

The global method cache allows Ruby to skip the method lookup process the next time your code calls a method listed in the first column of the global cache. After your code has called Fixnum#times once, Ruby knows that it can execute the Integer#times method, regardless of from where in your program you call times.
— Pat Shaughnessy, Ruby Under a Microscope

There are a fixed number of entries in the global method cache - by default, 2048 of them. A Shopify engineer finds that gives a 90%+ hit rate even for a really huge Rails app, so that's not bad.

You can set the number of entries, but only to a power of two, with the environment variable RUBY_GLOBAL_METHOD_CACHE_SIZE. The default is 2048, so you'll normally want to go up from there, not down. Each cache entry is 40 bytes. So the default cache uses about 80kb, and each time you double the number of entries, you double the size. Shopify's setting of 128k entries at 40 bytes/entry would use about 5 megabytes of memory.
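Trying a different size is just an environment variable. For instance:

# 16384 entries * 40 bytes = about 640kb of method cache
RUBY_GLOBAL_METHOD_CACHE_SIZE=16384 bundle exec rails server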

How's the Speed?

I write and maintain Rails Ruby Bench, a highly-concurrent Ruby benchmark based on Discourse, a large real-world Rails application. I do a lot of checking Ruby and Rails speed using it. And today I'll do that with Ruby's global method cache.

Discourse isn't as huge as Shopify's Rails app - few Rails apps are. Which means it may not need to increase the cache size as badly. But it certainly has far more possible cache entries than the default 2048. So it's a pretty good indicator of how much a mid-size Rails app benefits from the cache size increase.

Long-time readers will be expecting a pretty graph here, and I have bad news for them: the difference in speed when adjusting the cache size is so small that any reasonable way to graph it makes it look like they're identical - which they nearly are. Here are the results as a table:

RUBY_GLOBAL_METHOD_CACHE_SIZE   Mean req/sec   Std deviation   Speedup vs Default
1024                            155.3          1.7             -1%
2048 (default)                  156.8          1.5             0%
4096                            158.3          2.3             1%
8192                            159.5          3.2             1.7%
16384                           160.3          3.4             2.2%

So as you can see, my speedup from a larger number of cached entries is... Hm. If I check that Shopify article... They actually only claimed to get about 3% faster results. So my own results are directly in line with theirs. I see a tiny speedup, in return for a very small amount of memory.

So... Is It a Good Idea?

I don't see any harm in using this. But for most users, I don't think a savings of 2%-3% is worth bothering about. And that's assuming your Rails app is fairly large. I would expect a smaller app, or a non-Rails app, to gain very little or even nothing at all.

In most cases, I think Ruby's global method cache does a great job and doesn't require adjustment.

But now you know how to check!

 

Finding the right engineers for the job


The best engineers

"This project will be critical to our success. Let’s get our best software engineers on it…" Maybe you’ve heard something like this where you work. The question naturally arises, who are your “best” engineers?

When I first started my career, I believed that the best software engineer was one who rigorously applied the principles of good software engineering (SOLID) in a steady fashion to create something new. My thinking on this has shifted a bit. While I still very much believe in these principles, I now see them as a part of a larger picture.

The right engineers

A few years ago I attended a conference where I heard a talk from a company called CorgiBytes. They work as consultants specializing in improving and maintaining legacy code. To many software engineers, this would be the worst job imaginable. But the founders recognized that there is a special type of engineer who enjoys and even thrives on this kind of (often desperately needed) work. So they target hiring engineers with this specific strength.

CorgiBytes describes their members as, "the joyful janitors of your codebase." This quote is heartwarming to me. It reminds me that for every need, there exists a person who not only can do the job, but will derive their greatest satisfaction from doing it. What a wonderfully merciful thing that there is someone out there who will joyfully do the very thing you cannot stand doing.

True greatness, then, exists wherever a need meets a person perfectly suited to fulfill it.

The developer landscape

Okay, so if there are software engineers wired for legacy code projects, what other types of software engineers might exist? In his conference talk, the founder of CorgiBytes discussed the following model which he dubs the "developer landscape:"

[Figure: the "developer landscape" - a quadrant chart with hackers on the left, craftsmen on the right, makers above the x-axis and menders below it.]

Each quadrant in the model defines a particular type of developer. In the upper left hand corner you have your hacker-maker. This engineer is more concerned with "building the right thing" rather than "building the thing right." This person loves to crank out prototypes, experiment, and fail fast. In the upper right hand corner you have the craftsman-maker. This is your engineer who is more concerned with "building the thing right," rather than "building the right thing." This person wants to steadily apply the principles of SOLID. At the start of my career, I viewed this quadrant as the one strength that defined a good software engineer.

Below the x-axis are the code menders. The lower right hand corner represents the craftsman-mender. This is the type of engineer that CorgiBytes selects for. These engineers love to gut a nasty piece of code and replace it with something clear and maintainable. Finally, you have the hacker-mender in the lower left hand corner. These are the firefighters of your system. When a service is exploding or customers are in trouble, they parachute in and save the day. They are motivated by driving to a resolution quickly.

All of these strengths are needed in the context of a company like AppFolio that develops and maintains software services because each strength maps to a phase in the software lifecycle. Nothing truly new happens without the rapid prototyper, your system is quickly crippled by technical debt without your SOLID engineer, your codebase becomes legacy without your remodeler, and your services stop running without your firefighter.

Finding the right fit

In my first software engineering job, I worked in a context where the needs were almost solely SOLID and remodeler strengths. I enjoyed this work and I learned much. But I was unaware of the rapid prototyper and firefighting quadrants. I didn’t know they existed.

Things changed for me when I joined AppFolio. AppFolio’s organizational structure and focus on generalist teams suddenly exposed me to the entire developer landscape. I quickly learned that my real strength shines in the hacker quadrants. I can be quite happy in the craftsman quadrants, but I feel most alive in my work when I’m operating as a hacker. I wish I had learned this about myself sooner!

There is amazing power in finding the right fit for an engineer based on their strengths. I’ve been involved in several projects where we stacked the deck with "the best engineers," only to see these projects stagnate. We were evaluating whether an engineer was "the best" separate from the need they were intended to fulfill. Once we began asking, "which is the right type of engineer for this project?" we saw that different strengths were needed. When we made changes accordingly, we saw those projects come to life and succeed beyond what we even imagined.

Where do you fit?

What about you? Are you a great software engineer? As I said above, we have many opportunities at AppFolio for all of the developer types, as well as opportunities for developers who want to gain experience in new areas. Our organizational structure lends itself to exploration, and our journey as a company into new business verticals guarantees new adventures for many years to come. Come join us!

Ruby Memory Environment Variables - Simpler Than They Look.

You've probably seen some of the great posts on how you can use environment variables to tune Ruby's memory use. They look complicated, don't they? If you need to squeeze out every last ounce of performance, they can be useful. But mostly, they give a single, simple advantage:

Quicker startup time. More specifically, quicker time-to-full-speed.

You can configure your Ruby process with more memory slots or looser malloc/oldmalloc limits. If you don't, your process will still grow to the right size if it needs it. The only reason to set the limits manually is if you want your process to grow to full size and speed a little more quickly. If you're running a big batch job or a long-running server, the environment settings won't matter much after the first hour or so, and only a little after the first few minutes - your process will figure it out quickly in any case.

Why the speed difference? Mostly because when Ruby is still figuring out the right size for your process's memory, it has to garbage-collect a little more often. That slows things down until it hits its stride.

There are also some environment variables that set how fast to expand. Which, again, basically just affects the time to full speed -- unless you mess them up :-)

But I Really Want...

But what if you do want to set them for some reason? How do you know what to set them to?

I find that Ruby does a fantastic job of figuring that out, but it may take some time to do it. So why not use your same settings from last run?

That's what EnvMem does.

You run your process, dump the current settings (via GC.stat) and then use them for the next run.

There's hardly any reason to use a dedicated tool, though - if you look at how EnvMem works, it only loads a few entries from GC.stat into the corresponding environment variables. The tool is just executable documentation of which GC.stat entries correspond to which environment variables.

The three variables that it sets -- RUBY_GC_HEAP_INIT_SLOTS, RUBY_GC_MALLOC_LIMIT and RUBY_GC_OLDMALLOC_LIMIT -- are the ones that get your process to the right initial size. And doing it based on your previous run is better than any other method I know.
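Here's a minimal sketch of the idea. The GC.stat keys below are my best guess at the mapping - check EnvMem's source for the authoritative version:

# Run this at the end of a warmed-up process, then paste the output
# into the environment of your next run.
stats = GC.stat
puts "export RUBY_GC_HEAP_INIT_SLOTS=#{stats[:heap_available_slots]}"
puts "export RUBY_GC_MALLOC_LIMIT=#{stats[:malloc_increase_bytes_limit]}"
puts "export RUBY_GC_OLDMALLOC_LIMIT=#{stats[:oldmalloc_increase_bytes_limit]}"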

For most applications, let them run for a minute or two until the settings are automatically set correctly. If your application doesn't run that long, then congratulations - these aren't things you need to worry about. If you need fast startup time, use EnvMem. Or just do the same thing yourself, since it's easy.

But What About...?

This all sounds reasonable, sure. But what about those last few variables? What about the ones that EnvMem doesn't bother to set?

You can tune those, sure. Keep in mind that if you tune process size, then you should not tune the other variables exactly like you would for a new process.

Specifically: for a new process, you want to make sure expansion is fast and happens in big chunks, so that you have a nice low startup time. For a process that is old and carefully tuned, you want to make sure expansion is slow and happens a little at a time so that you don't waste too much memory.

Ruby has several "max" variables to prevent adding too much of anything at once. That can be disastrous if they're set too low - it means expansion happens very slowly, so full speed only happens after the process has been running for many minutes. But for a mature, well-tuned application, good "max" values can prevent bloating by allocating too much of a resource at one time.

So with that in mind, here are the last few variables you might choose to tune:

  • RUBY_GC_HEAP_FREE_SLOTS_GOAL_RATIO: for a fast-growing app you might set this low, around 0.3 to 0.6, to make sure you have lots of free slots. For a mature app, set it much higher, even up to 0.8, to make sure you're not wasting much memory on unused slots. Keep in mind that you need free slots for new objects in new requests, so this should basically never be higher than 0.95, and rarely higher than 0.8.
  • RUBY_GC_HEAP_GROWTH_MAX_SLOTS: this is the cap on how many new slots can be added at once. I find the defaults work great for me here. But if you're actually counting slots allocated on new requests (via GC.stat) it may make sense for you to limit the number of maximum slots allocated. If you aren't counting slots with GC.stat, please don't set this manually. Don't optimize before you profile.
  • RUBY_GC_MALLOC_LIMIT_MAX: this determines the fastest rate you can raise RUBY_GC_MALLOC_LIMIT, which in turn determines how often to do a major GC (one that checks the old-generation objects, not just new.) If you're using GC.stat and watching the malloc_increase_bytes_limit, this determines how fast to raise that (at most.) Until everything in this paragraph sounds straightforward, please don't customize this.
  • RUBY_GC_OLDMALLOC_LIMIT_MAX: this is like RUBY_GC_MALLOC_LIMIT_MAX, but it affects the oldmalloc limit instead of the malloc limit. That is, it affects how often you get major GCs in response to the old generation increasing in (estimated) size. Again, if this all sounds like Greek to you then you're happiest with the default settings - which are pretty good.

Happy Tuning! Better Yet, Happy Not-Tuning!

So then, what's the upshot? If you're just skipping down this far, my recommended upshot would be: Ruby mostly tunes its memory configuration wonderfully, and you should enjoy that and move on. The environment variables don't make a difference in the long-term runtime of your application, and you don't care about the (tiny) difference in startup/warmup time.

But let's pretend you're looking for even more detail about tuning and how/why the Ruby memory system works the way it does. May I recommend the slides from my RubyKaigi presentation? Don't skip the presenter notes, most of the interesting details are there.

 

Upgrading Rails 4 Controller Tests to Rails 5

At AppFolio, we're finally in the process of upgrading many of our Rails applications from Rails 4.2, up to Rails 5.2. Our first, and biggest step is to upgrade to Rails 5.0. While there are many parts necessary to complete this upgrade, I would like to share a few things we have done specifically to address the backwards incompatibility Rails 5 introduced in controller tests.

In case you aren't aware, it was common in Rails 4 to write controller test request methods in the following syntax:

get image_path, id: '12'

In addition to providing a params hash, perhaps you wanted to include something in the request's session, and pre-set a flash message as well. That can be accomplished via:

get image_path, { id: '12' }, { user_id: '39' },
    { success: 'page successfully created' }

Note that the first hash corresponds to params, the second hash session variables, and the third, flash values as indicated in the Rails 4.2 testing tutorial.

In addition to the get method, the test request methods, delete, head, patch, post, and put also existed in Rails 4.2. Finally, if you wanted to mimic an asynchronous request, one would instead use the xhr method, which aside from the first argument to indicate the HTTP verb, has the same method signature as the previously listed test request methods [ref]. An xhr method call might look like the following:

xhr :post, image_path, { title: 'New Image' }, { user_id: '39' }

In Rails 5, two things have changed: first, passing params, session, and flash by positional argument are no longer supported, and second, the xhr method no longer exists; it is replaced by adding an xhr: true keyword argument to one of the test request methods. Keyword arguments are awesome, and were introduced in Ruby 2.0, with required keyword arguments being introduced in Ruby 2.1 [ref].

The three prior test examples should be written for Rails 5 as follows:

get image_path, params: { id: '12' }

get image_path, flash: { success: 'page successfully created' },
    params: { id: '12' }, session: { user_id: '39' }

Note that in the above the order of the keyword arguments is irrelevant; I like to keep mine sorted.

post image_path, params: { title: 'New Image' }, session: { user_id: '39' }, xhr: true

Making any individual change from the Rails 4 syntax to that of Rails 5 is pretty trivial, however, it can be incredibly tedious and error prone when there are thousands of such invocations. To support this part of our upgrade to Rails 5, we have written two open source tools, and utilized another well known open source tool, Rubocop.

First, we wanted a way to support the Rails 5 syntax in Rails 4. Doing so would enable us to stop writing tests the old way, and instead write tests the new way. Additionally, once we've upgraded part of our application to use the new syntax, we wanted to enforce using only the new syntax in that part of the application so we didn't have to repeat the update process multiple times until we finally switched a project to depend on Rails 5. These two objectives were met through the introduction of our open source rails-forward_compatible_controller_tests gem.

This gem provides the ability for a Rails 4 application to use keyword arguments with the test request methods, as well as use the xhr: true keyword argument. Furthermore, this gem can be configured to do nothing, output DeprecationWarnings, or raise an exception when using the old syntax. The DeprecationWarning configuration is perfect while in transition, and raising exceptions is useful both while making a complete transition to the Rails 5 syntax, and afterward to ensure no regressions are introduced.

With the rails-forward_compatible_controller_tests gem in place, all that was left was to convert the thousands of Rails 4 test request method instances in our codebase. Fortunately, one tool was already available to aid in this effort. That tool is rubocop; more specifically rubocop used in combination with its Rails/HttpPositionalArguments cop.

By running the following, rubocop will autocorrect most uses of test request methods, except for uses of the xhr method:

rubocop -a --only Rails/HttpPositionalArguments

To support automatically fixing those xhr instances, we wrote and open sourced another gem, rails5_xhr_update. One way to utilize this gem is by looking for all files containing "xhr :" and passing those files to the rails5_xhr_update program with the --write option set to indicate overwriting the existing files like so:

git grep -l "xhr :" | rails5_xhr_update --write

By following the aforementioned steps we've made significant progress towards our Rails 5 upgrade. Of course, there is still a lot of work to do, and Rails 6 is on the horizon. If you’re interested in helping continue to support future versions of Rails while delivering exceptional value to our customers please see http://www.appfolioinc.com/jobs.
 

To Sleep, Perchance to Dream: Rails Ruby Bench and Sleepy GC

Hey, folks! It's been a few weeks since my last post about Rails Ruby Bench, so let's talk about some things you don't see it do, but it does behind the scenes! We'll also talk about an interesting new performance change that may be coming to Ruby 2.6.

That change is Eric Wong's Sleepy GC bug report and patch. With Sleepy GC, Ruby will garbage collect during spare (idle) cycles. If you're just here for the latest Ruby development news, skip this post and click the bug report. Original sources are always more complete than commentary, right?

(Want to skip the narrative and go straight to the upshot? Skip to the bottom -- look for "So Did It Work?")

A Little Story

A few days ago, the excellent Sam Saffron of the Discourse team asked for my opinion on a pending Ruby speed patch. Yay! Rails Ruby Bench exists for this exact purpose: when a new Ruby patch comes out, I check how much it speeds up Rails (or slows it down.) And then you all know!

Of course, just lately I've been working on scaling out the benchmark itself, as my Ruby coordinator post suggests - right now, even if Discourse scales up just fine, my benchmark tops out at an EC2 m4.2xlarge instance. Above that I'm not configuring enough connections to Postgres, so I can't run enough threads and processes to use all that capacity. Working on it!

As a result, I haven't been constantly running RRB on the latest head of Ruby, because I've been working on other stuff. Which means my results are a bit out of date. Ruby also keeps getting faster, and the speedups keep getting smaller. This should make sense -- you've seen my "look, faster Ruby!" numbers, and it keeps getting harder to get large speedups after the last few hundred large speedups. Which means I need to crank up the number of requests per run and the number of runs per batch to keep pace. The Ruby core team are good at what they do! At the moment my "quick, rough check" numbers are 10,000 HTTP requests per run and 30 runs per batch (that's 200k HTTP requests) for reference, and that doesn't catch really small differences! And that still gets occasional outlier runs, so that's definitely not enough to check, "hey, does this sometimes cause random slowdowns?"

When I checked, things were a little broken. There was a slight speed regression for late March and early April Rubies, and a slightly later version (around May 1st) wouldn't run Rails Ruby Bench at all - the requests just didn't return.

So here's a "thanks for reading" takeaway for you -- don't run your production infrastructure on untested, non-release Rubies from random dates in the repository. ;-)

But here's another: the speed regression didn't last. Even though they're mostly testing on non-Rails code, the Ruby core team do a great job of keeping everything in good shape - small problems tend to be caught and fixed rapidly, even in the long gaps between releases and previews.

Eventually I found a working Ruby, got a nice stable Rails Ruby Bench performance baseline just before the Sleepy GC patch, and ran a big batch of tests on Ruby 2.6.0 preview 1, the Ruby right before Sleepy GC, and Sleepy GC version 3 from Eric Wong's repository.

Wait, What's Sleepy GC?

Normally Garbage Collection (GC) runs when you've allocated a lot of memory, or when your process is running low on memory and needs more. In other words, normally you reclaim old unused memory when you need memory -- and not before. You can manually run garbage collection earlier than that in most languages (including Ruby), but that's not especially common.
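
For instance, plain Ruby lets you watch and trigger collections yourself -- GC.count and GC.start are both part of Ruby's built-in GC module:

before = GC.count   # collections run so far in this process
GC.start            # explicitly trigger a collection right now
GC.count - before   # => 1 (or more) -- we didn't wait until Ruby needed memory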

It can be hard -- or impossible -- to avoid random pauses in your program if you use garbage collection. That's one reason that GC tuning is such a big deal in the JVM, for instance. Random pauses aren't necessarily a problem for every workload, but ask a game programmer about GC some time and you'll see what's wrong with them!

Ruby normally has "idle" times, such as when it's waiting for a file to be read, or network packets to arrive, or a database query. There can also be idleness from explicit sleeps or delays if the Ruby process is trying not to use more CPU than necessary. In all of these cases, it may make sense for the garbage collector to do some of its work in the idle time rather than making your program wait when you need memory.

Of course, if your Ruby process has lots of threads then you may already be filling this idle time with other work.

So Did It Work?

The short version is: the current Sleepy GC doesn't do anything for Rails Ruby Bench. If you think for a second, this should make sense - RRB runs a giant concurrent workload flat-out from startup until shutdown, overloaded with threads so that every CPU is running Ruby code constantly. There are no unfilled idle cycles. So Sleepy GC neither speeds up nor slows down RRB detectably -- which is a win, if it speeds up other workloads. Sam Saffron suggests it may do well for Unicorn servers, for instance. That makes sense - Unicorn runs one thread per process, so it may have lots more idle time than a heavily-multithreaded Puma workload like RRB. Sleepy GC may be useful, but RRB is a terrible way to find out one way or the other. That's fine. No benchmark shows you everything you care about, and it's important to know which is which.

Meanwhile, from my viewpoint, it was a great success! I've determined to my own satisfaction that there aren't lots of idle cycles for GC that I'm failing to capture, so RRB did what it should have!

If you have a workload that you think may benefit from Sleepy GC, you can also try it out yourself. Sam Saffron says it helps certain Postgres workloads quite a lot, for instance. As of this writing, the latest branch is "git://80x24.org/ruby.git" on branch "sleepy-gc-v3". But read the bug report for the latest, always.
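
If you want to build it to try (the usual Ruby-from-source dance -- exact steps vary by platform and toolchain, and the install prefix here is just an example):

git clone git://80x24.org/ruby.git sleepy-gc && cd sleepy-gc
git checkout sleepy-gc-v3
autoconf && ./configure --prefix=$HOME/sleepy-gc-ruby && make -j && make install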

Ruby and Haskell: Culture is What You Don't Say

I'm working through a Haskell book with some friends. Learning something new is always good! But I'm also doing it because I write and teach Ruby. Learning from other communities helps me notice the cultural differences between, say, Haskell and Ruby.

I'm working with the excellent book "Haskell Programming from First Principles." It's far and away the best Haskell instruction I've found so far. It's easy to look at weirdness in bad instruction and say, "oh, this just isn't very good." But when you see things that seem weird in a first-class book like this, you're usually looking at a cultural difference.

Am I trashing Haskell? Or Haskell culture? Oh, heck no. I am really glad there are purists out there doing their thing. I'm thrilled to be learning from them. I'm very impressed with this Haskell book. Explaining unusual new concepts is hard.

But let's look at some differences between their culture and Ruby culture, shall we?

Judging Haskell by Its Cover

Our intrepid authors acknowledge that Haskell is known for being hard. To quote them:

There’s a wild rumor that goes around the internet from time to time about needing a Ph.D. in mathematics and an understanding of monads just to write “hello, world” in Haskell.
— Haskell Programming from First Principles (Allen and Moronuki)

Here's what's interesting about that: their entire first chapter is explaining the Lambda Calculus, before even talking about how to install the Haskell environment. Not just conceptual explanation, but in-depth math with work-it-out-for-yourself math exercises. They also say they strongly recommend not skipping it, and that (much) later chapters will make more sense if you know the math. They know that the math is intimidating to beginners. They respond by jumping very far into it, very rapidly.

Is that wrong? I don't think so. It's a very un-Ruby-ish cultural choice. Which is fine for a Haskell book, right? In Haskell, when you see something unfamiliar, mostly you need to not be intimidated by it. If you need it spoon-fed, you're probably in the wrong place.

Ruby tries really hard to have a gentle learning curve. It doesn't always succeed, but it tries very hard, to the point of rewriting all sorts of things in Ruby, documenting and testing to a fault, and generally beckoning folks in with "look how familiar this looks!" It's not that one way or another is better. The Haskell method will give you a fearless community with a "ho-hum" attitude to code that looks scary. If that bugs you, the door is that-a-way. The Ruby method gives you a lot of beginners (yay!) who sometimes need and expect more hand-holding. We like our way, but I can't really say what we do is right and what they do is wrong. I can say that you wind up with very different groups as a result.

This is by far the simplest, most approachable guide to Haskell I have ever seen. They try really hard to not require lots of up-front math, compared to nearly anything else. One of the authors learned programming more-or-less for this book. And they still open with the lambda calculus before "here's how you install Haskell" or any code whatsoever. The entire current Haskell community has learned from this or from much less friendly sources.

Speaking in Math

Haskell is well-known as pretty math-heavy. That makes sense. Even in a book that's very intentionally not "all math, all the time," here's an example description from chapter 2:

When we talk about evaluating an expression, we’re talking about reducing the terms until the expression reaches its simplest form. Once a term has reached its simplest form, we say that it is irreducible or finished evaluating. Usually, we call this a value. Haskell uses a nonstrict evaluation (sometimes called “lazy evaluation”) strategy which defers evaluation of terms until they’re forced by other terms referring to them.

Values are irreducible, but applications of functions to arguments are reducible. Reducing an expression means evaluating the terms until you’re left with a value. As in the lambda calculus, application is evaluation: applying a function to an argument allows evaluation or reduction.
— Haskell Programming from First Principles

That doesn't exactly require you to already know the math. But I feel very confident saying that if you find math intimidating, you will find that explanation intimidating as well.
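
For a Ruby-flavored taste of that idea -- the analogy is mine, not the book's -- Ruby's lazy enumerators also defer work until a value is actually demanded:

evens = (1..Float::INFINITY).lazy.select(&:even?)   # nothing computed yet
evens.first(3)   # => [2, 4, 6] -- work happens only when values are forced

Haskell's nonstrictness goes much deeper than this, but the "don't evaluate until forced" intuition carries over.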

This is, again, a major departure from how the Ruby community does it. In other words, it's another way in which their community is intentionally different. This is another case of, "we're going to explain it simply, plainly and in our own vocabulary, which often happens to be the same as mathematical vocabulary. If you're not already with us, we hope you'll catch up later."

Later in chapter 2, they say, "your intuitions about precedence, associativity, and parenthesization from math classes will generally hold in Haskell." So when they talk (sincerely!) about how you don't have to know that much math, understand that they're talking to an audience for whom the phrase "your intuitions [...] from math classes" is reasonable and unremarkable.

So... Haskell Unreasonably Assumes You Already Know Everything?

You might reasonably and fairly ask me at this point, "are you saying that Ruby is easier and better at explaining everything, then?" Not so much. Ruby has a different set of unspoken assumptions.

For instance, Haskell Programming from First Principles takes its sweet time explaining modular arithmetic, much more so than you'd expect from the rest of the book. It goes into detailed examples and hits a lot of corner cases explicitly in a way it doesn't for other operations. Modular arithmetic is certainly no harder than several things it skims over. Instead, modular arithmetic is less immediately familiar to most mathematicians than to programmers. A Ruby guide wouldn't usually call it out in such detail because historically, most Ruby programmers come from a language like C or Java that already has modulus built in, most frequently as the percent-sign operator.
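
To make that concrete in Ruby terms (these particular examples are mine, not the book's):

10 % 3            # => 1 -- the modulus most C-descended languages spell %
-7 % 3            # => 2 -- Ruby's % takes the sign of the divisor
7 % -3            # => -2
-7.remainder(3)   # => -1 -- Integer#remainder takes the sign of the dividend

Corner cases like the negative-number behavior are exactly what a math-first audience needs spelled out, and what a %-fluent programmer already half-knows.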

In fact, the famous old free version of the Pickaxe Book for Ruby spends a lot of time waxing poetic about how Ruby has the excellence of two or three programming languages you presumably know (Perl, Python) plus one or two you mostly know by reputation (Smalltalk). It isn't that Ruby makes no assumptions! Ruby's also okay with some quirkiness - have a look at Why's Poignant Guide to Ruby for an extreme-but-popular example.

I Didn't Say That! Though I'm Incomprehensible If You Don't Assume It.

One of the fastest ways to identify your culture is what you don't say. Haskell is fine if you're coming from math but don't know the "standard" C-descended-language idea of modulus, but very hard if you're not used to fairly abstract algebra. Ruby tutorials usually assume you've programmed in C or one of its descendants. They "know" you probably feel a little funky about Functional Programming and you probably don't have a math degree (even if you do -- I do!)

Neither one says this up front. They just say a lot of other things that casually assume them. If this "resonates" with you, it mostly means you're a match for their assumptions. Congratulations! It's always nice to find a community you fit in with. If it doesn't resonate with you, I have confidence you'll keep looking around until you find something that does. You seem resourceful that way.

Again, unstated assumptions aren't wrong. If you tried to state absolutely everything, you'd get another culture still, also with unstated assumptions (e.g. "we claim we have no unstated assumptions by virtue of cataloguing the obvious at great length - please pretend that completeness is possible in this universe.") Culture happens in the assumptions and what goes unsaid.

And the current cultures are neither right nor wrong. There may be some alternate Ruby universe where the founding Rubyists assume we all have math degrees, but we don't live in that one. Haskell could have come from a different group and speak in chemistry or biology analogies, but that's not where our world's Haskell community came from.

Can You Finish With a Moral Please?

It would be easy to tie this up with something smug on one side or the other. Nobody avoids having a preference about cultures, you know? It's easy to glibly say "Ruby is better because it's friendlier to novices" or "Haskell is better because it keeps the bar higher."

Try this as a moral, instead: don't just read and see if you get a good or a bad feeling. Listen to what gets said that makes you feel that way. Then, think about who it attracts or repels. Because culture isn't just in "learn this language!" books. It's in every part of the programming community - blog posts, Twitter, forums, talking in person.

Ruby has a very strong culture. If you're reading this, you're likely a part of it. There are problems coming, and storms to weather -- always, as there always have been.

Don't just drink the culture around you. Learn to see it consciously, and learn to make it for yourself. Our local culture can use your help, and every culture needs more people who can see it consciously.