You probably shouldn't send an Expect: 100-continue header in your HTTP requests—especially if you're making requests against a server running IIS or Nginx. If you're using .NET, you have to explicitly opt out of sending this header by setting Expect100Continue to false on the ServicePoint of any HttpWebRequest that you create.
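In code, the opt-out looks something like this (the url here is just a placeholder):

var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
request.ServicePoint.Expect100Continue = false;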
In a recent project, I found myself digging into the details surrounding the 100 Continue status in HTTP 1.1. The project involves uploading (potentially) large files to the server, and it initially seemed like explicitly supporting 100 Continue would be a good way to preserve bandwidth.
First off, there's surprisingly little reliable documentation about this status code beyond RFC 2616 itself. IIS automatically sends a 100 Continue for any request containing an Expect: 100-continue header, and it wasn't clear from my research whether there is actually a way to disable this—at the very least, it's not possible from within the context of a plain old ASP.NET application.
As I investigated further, I discovered that it's actually somewhat common for servers to handle this header without allowing application code any say in what happens. Nginx's reverse proxy module behaves this way, and projects like Gunicorn even depend on this behavior in order to prevent a certain class of DoS attack.
Normally, this would be the end of my investigation: what initially seemed like a cool feature would be a pain to implement, and on further reading it seemed to be of dubious value anyway. But then I started digging into related parts of the .NET framework…
It seems that the designers of the .NET framework weren't allowed to talk to the developers of IIS. If they had, the .NET framework designers might have noticed that sending an Expect: 100-continue header to servers that always respond with 100 Continue serves absolutely no purpose. In fact, the current behavior is worse than useless, because it introduces unnecessary latency any time you call out to an external web service. For an application that makes an occasional web request, this might not be so bad, but inside a web service with cross-service dependencies, or on a mobile device with naturally high latency, this is a big deal.
Fortunately, this header can be disabled: every HttpWebRequest has a ServicePoint with an Expect100Continue property. If you set this to false, you'll save yourself an unnecessary round trip. Better yet: write yourself a little utility method that creates an HttpWebRequest and disables the header, then flame anyone who doesn't use it.
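A minimal sketch of such a helper (the class and method names here are my own invention):

using System.Net;

public static class WebRequestUtil
{
    // Create an HttpWebRequest with Expect: 100-continue disabled, so we
    // don't pay an extra round trip before the request body is sent.
    public static HttpWebRequest Create(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.ServicePoint.Expect100Continue = false;
        return request;
    }
}

If you'd rather turn it off process-wide, setting the static ServicePointManager.Expect100Continue property to false before creating any requests accomplishes the same thing.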
Happy Hacking!
Lately, I’ve been experimenting with migrating web services from .NET on Windows to Mono on Ubuntu. While getting the code building and running in a VM was not terribly difficult, I soon found that authentication was a roadblock.
Our web services run across several subdomains. We’re moving towards using OAuth for everything, but many services still use Forms Authentication with a cookie that is shared across all subdomains. However, when I tried to use my cookie with a service running on Mono, I was treated as an unauthenticated user.
The first barrier to shared authentication between .NET and Mono is actually well-documented: the default cookie name under Mono is .MONOAUTH, rather than .ASPXAUTH.
Upon discovering this, I quickly modified my Web.config to explicitly set the cookie name to .ASPXAUTH…and nothing happened.
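For reference, the relevant Web.config fragment looks something like this (the timeout value is arbitrary):

<system.web>
  <authentication mode="Forms">
    <!-- Match the cookie name that .NET uses by default. -->
    <forms name=".ASPXAUTH" timeout="2880" />
  </authentication>
</system.web>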
Another early discovery was that Mono base-64 encodes the cookie, while .NET uses base-16. Again, changing this (in my case, by using a custom build of Mono) had no effect—I was still treated as an unauthenticated user.
Eventually, after quite a bit of digging around, I learned that the binary format for .NET’s Forms Authentication cookie is undocumented, so Mono had to implement a reasonable—yet incompatible—alternative format. Furthermore, if you choose to encrypt the cookie, .NET adds an extra 28 bytes of padding to the cookie using an unspecified hash function—my best guess is that this has something to do with avoiding hash collision attacks.
In any case, I spent a lazy Saturday afternoon poking around with the format and arrived at a method of decoding the cookie, which is included here for your reading pleasure:
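In sketch form, it looks like the following; it assumes AES for decryption and SHA1 for signing, and the byte offsets are educated guesses, so verify them against your own cookies:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Web.Security;

public static class FormsAuthCookieDecoder
{
    // decryptionKey holds the raw bytes of <machineKey decryptionKey="...">;
    // decode the hex string with your favorite base-16 implementation.
    public static FormsAuthenticationTicket Decode(byte[] cookie, byte[] decryptionKey)
    {
        byte[] plaintext;
        using (var aes = Aes.Create())
        {
            aes.Key = decryptionKey;
            aes.IV = new byte[aes.BlockSize / 8]; // assumption: an all-zero IV
            using (var decryptor = aes.CreateDecryptor())
            {
                plaintext = decryptor.TransformFinalBlock(cookie, 0, cookie.Length);
            }
        }

        // Skip the 28 bytes of hash-derived padding at the front, and ignore
        // the 20-byte SHA1 signature at the end.
        var stream = new MemoryStream(plaintext, 28, plaintext.Length - 28 - 20);
        using (var reader = new BinaryReader(stream))
        {
            int version = reader.ReadByte();
            string name = ReadString(reader);
            var issued = new DateTime(reader.ReadInt64());
            var expires = new DateTime(reader.ReadInt64());
            bool persistent = reader.ReadBoolean();
            string userData = ReadString(reader);
            string path = ReadString(reader);
            return new FormsAuthenticationTicket(version, name, issued, expires,
                                                 persistent, userData, path);
        }
    }

    // Assumption: strings are stored as a length prefix followed by UTF-16.
    private static string ReadString(BinaryReader reader)
    {
        int length = reader.ReadByte();
        return Encoding.Unicode.GetString(reader.ReadBytes(length * 2));
    }
}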
As you can see, it’s very proof-of-concepty, and it assumes specific encryption and signing algorithms, but it gets the job done and should be pretty easy to extend and generalize. Just drop this code into your Global.asax.cs, provide your favorite implementation of base-16 string to bytes, and you’re good to go.
Happy hacking!
Quite a bit has changed since I wrote my post a few years ago on building Mono for OS X. So much has changed, in fact, that I've decided that a new post on the topic is in order. These instructions assume that you're using Snow Leopard. They should work on Leopard with minimal changes, but your mileage may vary.
Installing the prerequisites for Mono is much simpler than it once was. Previously, Mono had external dependencies like glib, which meant you needed to install MacPorts, but the Mono team has taken steps in the last year to reduce and eliminate such dependencies.
If you haven't yet heard about git, stop hiding under that rock! The Mono source is hosted on GitHub, so if you want to download and build it, you'll need a copy of Git. You can acquire it here.
Xcode 4 is now a $4.99 download in the Mac App Store. If you're too cheap for that, you can still (as of the time of this writing) grab Xcode 3 here (free registration required). A copy may also be on one of the discs that came with your Mac.
While you can (theoretically) bootstrap Mono, it's not something I recommend unless you enjoy pain. Save yourself some trouble and download the latest version here.
The Mono source code is now hosted on GitHub. Open your Terminal and clone the repository by issuing the command
git clone git://github.com/mono/mono.git
First, you'll need to run autogen.sh. Here's how I usually invoke it:
./autogen.sh --with-sgen=no --with-xen_opt=no --prefix=/opt/mono-`git rev-parse HEAD` --with-mcs-docs=no --disable-nls
My rationale for each option is listed below.
--with-sgen=no: The Mono team considers the SGen garbage collector to be production ready, but I've had problems with it, so I still turn it off.
--with-xen_opt=no: I'm not running Xen, so there's no need for this.
--prefix=/opt/mono-`git rev-parse HEAD`: You may want to just use --prefix=/opt/mono. I include the git revision in the path to make it easier to keep a number of parallel installations around for testing purposes.
--with-mcs-docs=no: The documentation takes a while to build and generally won't be much different than what's in your installed copy of Mono. Save yourself some time and leave it out.
--disable-nls: I don't need this particular feature. If you do, leave it in.
Assuming that autogen.sh completes successfully, you can now build it with
make -j1
Depending upon how your development environment is set up, you may not need the -j1. However, if your default MAKEFLAGS has a different -j option, you'll want to use this; otherwise, the Mono build can fail in strange ways. Once this is done, run sudo make install, and you'll be all set!
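As a quick sanity check (shown for the plain /opt/mono prefix; substitute your revision-stamped path if you used the other form):

export PATH=/opt/mono/bin:$PATH
mono --version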
I've found this page useful on a few occasions, so I'm linking it from here so that I don't forget where it is.
The author demonstrates how to use git-filter-branch to completely remove files from a git repository. I haven't felt compelled to take all the steps that he does in order to get rid of the files, but it's an excellent reference for the command, nonetheless.
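For my own future reference, the core of it is an index-filter invocation along these lines (the path is a placeholder):

git filter-branch --index-filter 'git rm --cached --ignore-unmatch path/to/big-file.bin' --prune-empty -- --all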
This is probably one of the most basic concepts for experienced Rails developers, but I had a hard time finding what I was looking for on Google, which means that other people are probably having a hard time too. So here's my first little contribution to the Rails community…
I recently moved from SubText to a tiny little (incomplete!) blog engine that I wrote for myself in Rails. While working on it, I was also reading a little bit about SEO; one of the concepts expressed was that each url on your site should have something to do with the content of the page to which it refers. While Rails has good defaults for a lot of things, a url like http://example.com/posts/123 certainly leaves room for improvement.
While I knew that I could accomplish the effect I wanted by creating a bunch of custom routes in routes.rb, I wanted to be able to take advantage of map.resources and methods such as link_to, which seemed to insist upon using the record ID of my ActiveRecord model.
Until I found to_param.
to_param, as it turns out, is a very simple method that you can override in any ActiveRecord model. Whatever value you return from this method will be used in any urls generated by methods like link_to. The one caveat is that, after you do this, you must change your controllers so that they look up records by this new external ID. In many cases, this may be as simple as changing Post.find(params[:id]) to Post.find_by_name(params[:id]).
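Here's a minimal sketch of the whole arrangement; it assumes a Post model whose name column already contains url-safe slugs:

class Post < ActiveRecord::Base
  # Generated urls will use the post's name instead of its numeric id,
  # e.g. /posts/my-first-post rather than /posts/123.
  # (Assumes name is already url-safe; otherwise slugify it first.)
  def to_param
    name
  end
end

class PostsController < ApplicationController
  # Look the record up by the external id that to_param now produces.
  def show
    @post = Post.find_by_name(params[:id])
  end
end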