daveBlog

There is a madness to my method…


Enabling .jsx files in React Native

React Native is a really cool, fast-moving technology, but it forces some strange defaults on its users. One such choice is the refusal to support the .jsx extension out of the box. While Babel (which React Native also requires) has no problem handling JSX syntax in .js files, many style guides (including Airbnb’s linter rules) require JSX syntax to be isolated to .jsx files.

Although the core React Native team doesn’t show any signs of reversing their stance, it is possible in recent versions to extend the list of permissible file extensions. I haven’t been able to find any mention of it in the official documentation, but React Native allows customizing several parts of the packager pipeline, including the list of acceptable file extensions for source files.

Configuration options are specified in much the same way as in webpack: you create a JavaScript module that exports an object containing all of the fields that you want to override. It appears that there are two ways to provide this file to React Native, but one of them is a trap.

How Not To Do It

The React Native scripts provide a --config option, so you can supply any file as the configuration module. However, it has two major flaws.

The first is that the scripts take this value as provided on the command line and pass it directly to a call to require deep in the guts of React Native. This means that your path must either be relative to wherever that require call resides (which is naturally subject to change), or it has to be an absolute path. Yuck.

The other limitation shows up when you try to run your app in the iOS simulator (and presumably the Android emulator, but I haven’t tested this). When you run yarn ios (or npm run ios) in a project that includes native components, the React Native script builds your application in one Terminal window while spawning another Terminal process to handle JavaScript bundling—and it doesn’t forward your configuration file to that child process.

This means that if you want reliable configuration overrides, you have to rely on magic.

Magic

React Native’s configuration loader will search all of its ancestor directories for a file called rn-cli.config.js, and it will load the one closest to the root directory. This means that if you create a file with that name at your project root, it will be loaded and used automatically, no matter which React Native script you’re using. It also means that if you want to be really evil to one of your coworkers who left a machine unlocked, you can run touch /rn-cli.config.js to completely break any projects using a custom configuration.

So how do we use this knowledge to enable .jsx files? We just add this config:
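
A minimal rn-cli.config.js along these lines should do it (getSourceExts is the packager’s hook for extra extensions; treat the exact shape as an assumption, since it may vary across React Native versions):

module.exports = {
  // Tell the packager to accept .jsx sources in addition to its defaults.
  getSourceExts() {
    return ['jsx'];
  },
};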

Now you can use .jsx files with abandon!

What Happens in the Default Implementation of Stream.ReadAsync Will Shock You

tl;dr The base implementation of Stream.ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken) doesn’t do what you might reasonably expect with respect to cancellation. Make sure you check whether your Stream subclass provides a proper implementation before relying on it.

A common pattern when cancelling a Task is to wait for it to complete, which ensures that any unmanaged resources the Task might be using are safe to clean up:

cancellationTokenSource.Cancel();

try
{
  task.GetAwaiter().GetResult();
}
catch (OperationCanceledException)
{
}

// dispose of other stuff...

However, this pattern can lead to application hangs if the Task in question doesn’t do a good job of supporting cancellation, and the .NET Framework makes it surprisingly easy to fall into Pits of Failure. One such pit of failure is Stream.ReadAsync.

Background

I recently needed to build a system to communicate between a couple of processes running on the same machine. Eventually, I settled on using Windows’ named pipes, which are exposed in the .NET Framework as NamedPipeServerStream and NamedPipeClientStream.

For both ends of the pipe, I created a simple manager class that creates a Task that handles connecting/reconnecting to the pipe and listening for messages from the other end until canceled. In the Dispose method of the manager, I followed the pattern described above. In the Task, the CancellationToken was checked at each iteration, and it was also passed to all async calls. To a casual reader, the implementation appeared flawless.
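
The listening portion looked something like this (a simplified sketch; HandleMessage is a hypothetical stand-in for the real message handling, and the connect/reconnect logic is elided):

private static async Task ListenAsync(PipeStream pipe, CancellationToken token)
{
  var buffer = new byte[4096];
  while (!token.IsCancellationRequested)
  {
    // The token is passed to the async read, exactly as recommended...
    int bytesRead = await pipe.ReadAsync(buffer, 0, buffer.Length, token);
    if (bytesRead == 0)
      break; // the other end disconnected

    HandleMessage(buffer, bytesRead);
  }
}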

The processes also both hung on Dispose if they ever succeeded in connecting to each other. Pausing the process in the debugger indicated that the hanging thread was waiting for the Task to finish, and no threads were currently associated with the Task.

Huh?

In terms of the Win32 API, a named pipe is mostly just a file: named pipe clients can even open one by calling CreateFile directly. This means that under the hood, pipes support nearly all of the same operations as files, including overlapped I/O and CancelIoEx.

If you take a look at .NET Core’s implementation, you’ll even find that the Windows-specific implementation of ReadAsync creates an instance of a subclass of TaskCompletionSource that uses these features. You could easily be forgiven for assuming that the .NET Framework would be implemented in exactly the same way.

But you’d be wrong.

Instead, the .NET Framework, as of .NET 4.7, provides no override of ReadAsync on the pipe stream classes, which means that they delegate to Stream’s default implementation. That implementation checks the value of the CancellationToken at the beginning of the method; if it has not been triggered, it discards the token and delegates to a BeginRead/EndRead-based implementation of async I/O.
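
Put in code, that behavior looks roughly like this (a paraphrase for illustration, not the actual framework source):

public virtual Task<int> ReadAsync(byte[] buffer, int offset, int count,
  CancellationToken cancellationToken)
{
  // The token is consulted exactly once, right here...
  if (cancellationToken.IsCancellationRequested)
    return Task.FromCanceled<int>(cancellationToken);

  // ...and then discarded: the BeginRead/EndRead path below never sees it,
  // so a read that is already in flight can never be cancelled.
  return Task<int>.Factory.FromAsync(BeginRead, EndRead, buffer, offset, count, null);
}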

This is what caused my hang.

Workarounds

I’m not aware of a general-purpose workaround that will work for every subclass of Stream.

In the case of NamedPipeClientStream and NamedPipeServerStream, it is possible to Dispose the stream during an asynchronous read, which will have the effect of terminating the read. If you do this, you may also need to be prepared to catch an ObjectDisposedException when waiting for your Task to complete.
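
Applied to the waiting pattern from the top of the post, that looks something like this (pipeStream stands for whichever pipe stream the task is reading from):

cancellationTokenSource.Cancel();

// Disposing the stream forcibly terminates any in-flight read, letting the
// task reach a terminal state instead of hanging forever.
pipeStream.Dispose();

try
{
  task.GetAwaiter().GetResult();
}
catch (OperationCanceledException)
{
}
catch (ObjectDisposedException)
{
  // The aborted read may surface as this instead.
}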

You could also use P/Invoke to drop down directly to the Win32 API to create a named pipe as a SafeFileHandle and pass it to the FileStream constructor: FileStream does override ReadAsync in terms of overlapped I/O and CancelIoEx, so you could call it without worrying about your Task becoming orphaned. On the other hand, you’d also have to continue dropping to the Win32 API to handle all of the connection-oriented logic.
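
The client side of that approach might look like the following (an illustrative sketch only; the pipe name is hypothetical and error handling is omitted):

using Microsoft.Win32.SafeHandles;
using System.Runtime.InteropServices;

static class PipeInterop
{
  [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
  static extern SafeFileHandle CreateFile(string fileName, uint desiredAccess,
    uint shareMode, IntPtr securityAttributes, uint creationDisposition,
    uint flagsAndAttributes, IntPtr templateFile);

  const uint GENERIC_READ = 0x80000000;
  const uint OPEN_EXISTING = 3;
  const uint FILE_FLAG_OVERLAPPED = 0x40000000;

  // Opens an existing named pipe as a FileStream whose ReadAsync supports
  // real cancellation (overlapped I/O plus CancelIoEx under the hood).
  public static FileStream OpenPipe(string name)
  {
    SafeFileHandle handle = CreateFile(@"\\.\pipe\" + name, GENERIC_READ, 0,
      IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, IntPtr.Zero);

    // isAsync: true is required for the overlapped (cancellable) path.
    return new FileStream(handle, FileAccess.Read, 4096, isAsync: true);
  }
}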

In any case, if you’re writing code that deals with Streams and needs to handle cancellation, make sure your subclass properly supports it!

MonkeySpace Talk Slides and Notes

I delivered my first conference talk at MonkeySpace yesterday. The video will be available at some point, but in the meantime, here are the slides and some notes to help make sense of them.

Overview

Mono is a potentially attractive option for organizations that are already using ASP.NET because IIS is a pain to configure and Windows licenses can get really expensive.

Many libraries indicate that they support Mono, but there is little information about using it for web sites and services beyond basic “getting started” tutorials. I’m trying to document and publish my experiments as I go along to help others who might be trying this, and in the hope that more people will start documenting their experiences, too.

The major snags I encountered were:

  • xbuild doesn’t have a good fallback mechanism for unknown profiles (we use the Client Profile for some assemblies).
  • Microsoft’s format for Forms Authentication cookies is undocumented and difficult to reverse engineer (see my earlier post on this topic).
  • Mono’s support for SQL Server hasn’t received significant updates for SQL Server 2008 and 2012.
  • New Relic—an awesome monitoring service we use—doesn’t support Mono.

I worked around all of these issues and even built my own version of the New Relic Profiler (linked below). In testing a simple service, I encountered very inconsistent performance that was quite sensitive to GC pauses.

There was a period of about seven minutes during which the performance of the Mono version of the service was as good as—or better than—its Windows counterpart, so I plan to continue investigating. These are some initial areas I intend to look into:

  • Is the New Relic profiler itself causing noise?
  • Does Mono 3.2 solve this issue? (I tested with one of the 3.0.x releases)
  • Is there something about the application’s architecture that causes difficulty for sgen?
  • Would a stack other than ASP.NET MVC give me better throughput?

Image Credits

Several of the images that I used were licensed under Creative Commons. Here are links to their original locations:

HTTP 100 Continue, Latency, and You

TL;DR

You probably shouldn’t send an Expect: 100-continue header in your HTTP requests—especially if you’re making requests against a server running IIS or Nginx. If you’re using .NET, you have to explicitly opt out of sending this header by setting Expect100Continue to false on the ServicePoint of any HttpWebRequest that you create.

Background

In a recent project, I found myself digging into the details surrounding the 100 Continue status in HTTP 1.1. The project involves uploading (potentially) large files to the server, and it initially seemed like explicitly supporting 100 Continue would be a good way to preserve bandwidth.

The Problem

First off, there’s surprisingly little reliable documentation about this status code beyond RFC 2616 itself. IIS automatically sends a 100 Continue for any request containing an Expect: 100-continue header, and it wasn’t clear from my research whether there is actually a way to disable this—at the very least, it’s not possible from within the context of a plain old ASP.NET application.

As I investigated further, I discovered that it’s actually somewhat common for servers to handle this without allowing application code to have a say in what happens. Nginx’s reverse proxy module behaves this way, and projects like Gunicorn even depend on this behavior in order to prevent a certain class of DoS attack.

Normally, this would be the end of my investigation: what initially seemed like a cool feature would be a pain to implement, and after reading more about it, it seemed to be of dubious value anyway. But then I started looking into related parts of the .NET Framework…

HttpWebRequest

It seems that the designers of the .NET Framework weren’t allowed to talk to the developers of IIS. If they had been, they might have noticed that sending an Expect: 100-continue header to servers that always respond with 100 Continue serves absolutely no purpose. In fact, the current behavior is worse than useless, because it introduces unnecessary latency any time you call out to an external web service. For an application that makes an occasional web request, this might not be so bad, but inside of a web service with cross-service dependencies, or on a mobile device with naturally high latency, this is a big deal.

Fortunately, this header can be disabled: every HttpWebRequest has a ServicePoint with an Expect100Continue property. If you set this to false, you’ll save yourself an unnecessary round trip. Better yet: write yourself a little utility method that creates an HttpWebRequest and disables the header, then flame anyone who doesn’t use it.
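
Such a helper can be as small as this:

static HttpWebRequest CreateRequest(Uri uri)
{
  var request = (HttpWebRequest)WebRequest.Create(uri);

  // Skip the Expect: 100-continue handshake and its extra round trip.
  request.ServicePoint.Expect100Continue = false;

  return request;
}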

Happy Hacking!

Decoding Forms Authentication Cookies With Mono

Lately, I’ve been experimenting with migrating web services from .NET on Windows to Mono on Ubuntu. While getting the code building and running in a VM was not terribly difficult, I soon found that authentication was a roadblock.

Our web services run across several subdomains. We’re moving towards using OAuth for everything, but many services still use Forms Authentication with a cookie that is shared across all subdomains. However, when I tried to use my cookie with a service running on Mono, I was treated as an unauthenticated user.

The first barrier to shared authentication between .NET and Mono is actually well-documented: the default cookie name under Mono is .MONOAUTH, rather than .ASPXAUTH. Upon discovering this, I quickly modified my Web.config to explicitly set the cookie name to .ASPXAUTH…and nothing happened.

Another early discovery was that Mono base-64 encodes the cookie, while .NET uses base-16. Again, changing this (in my case, by using a custom build of Mono) had no effect—I was still treated as an unauthenticated user.

Eventually, after quite a bit of digging around, I learned that the binary format for .NET’s Forms Authentication cookie is undocumented, so Mono had to implement a reasonable—yet incompatible—alternative format. Furthermore, if you choose to encrypt the cookie, .NET adds an extra 28 bytes of padding to the cookie using an unspecified hash function—my best guess is that this has something to do with avoiding hash collision attacks.

In any case, I spent a lazy Saturday afternoon poking around with the format and arrived at a method of decoding the cookie, which is included here for your reading pleasure:
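
The full listing is omitted here, but the decoding boils down to the following steps (a rough sketch: FromBase16 and DecryptAes are hypothetical stand-ins for your own hex-decoding and AES helpers, and the padding position is an assumption):

static byte[] ExtractTicketBytes(string cookieValue, byte[] decryptionKey)
{
  // .NET stores the cookie value as a base-16 string.
  byte[] raw = FromBase16(cookieValue);

  // Decrypt it; AES is assumed here, matching the note above.
  byte[] plain = DecryptAes(raw, decryptionKey);

  // Skip the 28 bytes of padding mentioned above (assumed to sit at the
  // front). What remains is the serialized ticket: version, name,
  // issue/expiry dates, persistence flag, user data, and cookie path.
  byte[] ticket = new byte[plain.Length - 28];
  Array.Copy(plain, 28, ticket, 0, ticket.Length);
  return ticket;
}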

As you can see, it’s very proof-of-concept-y, and it assumes specific encryption and signing algorithms, but it gets the job done and should be pretty easy to extend and generalize. Just drop this code into your Global.asax.cs, provide your favorite implementation of base-16 string-to-bytes conversion, and you’re good to go.

Happy hacking!