Mandy's Tech Blog

A surprising behavior of React's useEffect

Since React 16.8 came out earlier this week, I’ve been playing around with hooks in personal projects so that I can better understand their ins and outs.

Many of the hooks that accept callbacks also take an optional array of values that are used for optimization purposes: if none of the values in the array change between renders, the callback is skipped.

The docs contain a tantalizing hint about the future:

The array of inputs is not passed as arguments to the effect function. Conceptually, though, that’s what they represent: every value referenced inside the effect function should also appear in the inputs array. In the future, a sufficiently advanced compiler could create this array automatically.

Until “the future” comes, determining which values should appear in the inputs array seems like a really tedious (and error-prone!) code review task, so I tried writing a utility method to remove some of the “sharp edges”:
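
(A sketch; I’ve called the hook useBoundEffect, the name used below.)

import { useEffect } from 'react';

// Like useEffect, but the inputs array is forwarded to the callback as
// arguments, so a callback defined outside the component has no way to
// close over unlisted values.
function useBoundEffect<TArgs extends any[]>(
  effect: (...args: TArgs) => void | (() => void),
  inputs: TArgs
): void {
  useEffect(() => effect(...inputs), inputs);
}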

With this utility method, the values are passed to the callback. If the callback is defined outside of the component, then it becomes impossible for it to use any unlisted values, making code review much simpler.

Unfortunately, it has a critical flaw when used with refs.

Suppose I have a component that needs to attach an event handler directly to a DOM node:
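
(A sketch: addTouchListener stands in for a helper that registers a non-passive touchstart handler and returns a cleanup function.)

import React, { useRef } from 'react';

// Register a touchstart handler directly on the canvas so that
// preventDefault actually takes effect; return a cleanup function.
function addTouchListener(canvas: HTMLCanvasElement | null) {
  if (!canvas) {
    return;
  }
  const handler = (e: TouchEvent) => e.preventDefault();
  canvas.addEventListener('touchstart', handler, { passive: false });
  return () => canvas.removeEventListener('touchstart', handler);
}

function CanvasElement() {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  // canvasRef.current is null during the first render, and that is the
  // value that gets bound here.
  useBoundEffect(addTouchListener, [canvasRef.current]);
  return <canvas ref={canvasRef} />;
}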

Aside: this is more than just a hypothetical example: React attaches all event handlers at the document level. One unfortunate consequence of this is that Chrome will force any touch event handlers to be passive. This is undesirable for things like drawing controls, which may need to call preventDefault to stop unwanted scrolling.

As written, this effect will never be attached to the canvas element unless an ancestor of CanvasElement triggers a re-render.

This is the case because the value provided to addTouchListener gets bound at render time: before canvasRef.current is set to an actual canvas element. It’s not until the second render–when canvasRef.current is set to the value from the first render–that addTouchListener is called with a DOM element.

This leads to another interesting discovery. Suppose we abandon useBoundEffect so that we can pass the correct value to addTouchListener:

In this version, the effect attaches the event handler after the first render, as desired, but it also cleans up and re-attaches on the second render. This happens for the same reason that the event handler was not attached the first time: the initial value of canvasRef.current is what is used to determine whether the effect needs to be re-run.

React itself could provide a solution to this by allowing the second argument to be specified as a function that returns an array, instead of the array itself. If React called this function immediately before applying the effect, it would enable a much more precise calculation of whether it needs to be re-applied.

On the other hand, I’m not keen to wait on such a feature, so I decided to see if I could emulate it. This is what I came up with:
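
(A sketch of the approach; the hook name and the bookkeeping details are illustrative.)

import { useEffect, useRef } from 'react';

// Like useBoundEffect, but the inputs are computed by a function that
// is called just before the effect would run, so ref values are fresh.
function useLazyBoundEffect<TArgs extends any[]>(
  effect: (...args: TArgs) => void | (() => void),
  getInputs: () => TArgs
): void {
  const state = useRef<{
    inputs?: TArgs;
    cleanup?: void | (() => void);
  }>({});

  // First effect: runs after every render, re-applying the effect only
  // when the freshly computed inputs differ from the previous ones.
  useEffect(() => {
    const inputs = getInputs();
    const previous = state.current.inputs;
    if (
      !previous ||
      previous.length !== inputs.length ||
      previous.some((value, i) => value !== inputs[i])
    ) {
      if (typeof state.current.cleanup === 'function') {
        state.current.cleanup();
      }
      state.current.inputs = inputs;
      state.current.cleanup = effect(...inputs);
    }
  });

  // Second effect: runs only on the first render, registering a
  // cleanup for when the component unmounts.
  useEffect(
    () => () => {
      if (typeof state.current.cleanup === 'function') {
        state.current.cleanup();
      }
    },
    []
  );
}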

In this version, we have to use two effects. The first runs on every render to determine whether the inputs have changed. The second is necessary to clean up when the component is unmounted: it runs only on the first render and sets up a cleanup method.

Aside #2: if you try to use this in your code and supply the second argument in the form () => [value1, value2], you may get an error from the TypeScript compiler complaining that the return type is not compatible with the expected type. This happens because TypeScript is really conservative about inferring arrays as tuples. If you want to avoid adding a bunch of explicit types everywhere, you can define a method like this to give the compiler the necessary hint:
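
(A minimal version of that helper.)

// Rest parameters make the compiler infer T as a tuple type instead of
// widening it to an array type.
function createTuple<T extends any[]>(...args: T): T {
  return args;
}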

With this in place, the second argument would change to the form () => createTuple(value1, value2). I’m hopeful that future versions of TypeScript will make this simpler, as there are a number of GitHub Issues related to tuple inference; on the other hand, there are no concrete proposals, so I’m not holding my breath.

Send Cookies to a WKWebView With This One Weird Trick

Back in iOS 8, Apple introduced a new control for hosting web content: WKWebView. In most respects, it’s far superior to the old UIWebView: it’s faster, has fewer rendering issues, and provides better JavaScript support.

However, one area in which WKWebView is demonstrably worse is in its handling of cookies. iOS provides a service called NSHTTPCookieStorage that is used by most HTTP-related functionality, including UIWebView—but not WKWebView.

The reason for this is documented in a long-lived WebKit bug. In WKWebView, most of the underlying functionality actually lives in a process outside of your application. Among other benefits, this policy allows Apple to use a JavaScript engine that produces executable code at runtime without giving all processes this privilege. Unfortunately, this complicates interactions with things that do live in your application—like cookie stores. While this issue has been addressed on master, it’s not yet in a shipping version of iOS. (It looks like iOS 11 may provide a way to interact with WKWebView’s cookie store.)

For apps that host web applications that depend on cookies for authentication, this poses a major problem. Although web requests originating in the application can access the shared cookie store, these cookies are never propagated to the web view, meaning that subsequent AJAX requests or page navigations will be unauthenticated.

The most popular solution on the bathroom wall appears to be injecting extra JavaScript into each page to programmatically set the cookie. While this works, it requires dynamically generating JavaScript, which is kind of gross.

If you have control of the server hosting your web content, there’s another way to slip your cookies into a WKWebView: just get the server to send back a Set-Cookie header in its response to your initial navigation request. In the case of an app that I was recently working on, we added a small bit of code on the server looking for an X-Echo-Cookie HTTP header. For any request containing this header, it would include a Set-Cookie header whose content was whatever was in the request’s Cookie header.
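
Sketched as Express middleware (any stack with access to the request and response headers can do the same):

import express from 'express';

const app = express();

// If the request carries X-Echo-Cookie, echo each cookie from the
// request's Cookie header back as its own Set-Cookie header. Note that
// only the name=value pairs survive; attributes like Path or Expires
// are not preserved.
app.use((req, res, next) => {
  const cookies = req.headers.cookie;
  if (req.headers['x-echo-cookie'] && cookies) {
    res.setHeader('Set-Cookie', cookies.split(/;\s*/));
  }
  next();
});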

By including this header on our initial navigation request, we were able to get the cookie to “jump” from the application’s cookie store to WKWebView’s. While this solution doesn’t cover complex scenarios, like the cookie being updated or invalidated while the user is navigating, it worked just fine for our scenario.

Enabling .jsx files in React Native

React Native is a really cool piece of technology, but it forces some strange defaults on its users. One such choice is the refusal to support the .jsx extension out of the box. While Babel (which is also required by React Native) has no problem handling JSX syntax in .js files, many style guides (including Airbnb’s linter rules) require JSX syntax to be isolated to .jsx files.

Although the core React Native team doesn’t show any signs of reversing their stance, it is possible in recent versions to extend the list of permissible file extensions. I haven’t been able to find any mention of it in the official documentation, but React Native allows customizing several parts of the packager pipeline, including the list of acceptable file extensions for source files.

Configuration options are specified in much the same way as in webpack: you create a JavaScript module that exports an object containing all of the fields that you want to override. It appears that there are two ways to provide this file to React Native, but one of them is a trap.

How Not To Do It

The React Native scripts provide a --config option, so you can supply any file as the configuration module. However, it has two major flaws.

The first is that the scripts take this value as provided on the commandline and pass it directly to a call to require deep in the guts of React Native. This means that your path must either be relative to wherever the require call resides (which is naturally subject to change), or it has to be an absolute path. Yuck.

The other limitation shows up when you try to run your app in the iOS simulator (and presumably the Android emulator, but I haven’t tested this). When you run yarn ios (or npm run ios) in a project that includes native components, the React Native script builds your application in one Terminal window while spawning another Terminal process to handle JavaScript bundling—and it doesn’t forward your configuration file to the child process.

This means that if you want reliable configuration overrides, you have to rely on magic.

Magic

React Native’s configuration loader will search all of its ancestor directories for a file called rn-cli.config.js, and it will load the one closest to the root directory. This means that if you create a file with that name at your project root, it will be loaded and used automatically, no matter which React Native script you’re using. It also means that if you want to be really evil to one of your coworkers who left a machine unlocked, you can run touch /rn-cli.config.js to completely break any projects using a custom configuration.

So how do we use this knowledge to enable .jsx files? We just add this config:
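
(A sketch using the getSourceExts override; the extensions it returns are merged with the packager’s defaults.)

// rn-cli.config.js, at the project root
module.exports = {
  // .jsx is added alongside the default extensions, so plain .js
  // files keep working.
  getSourceExts: () => ['jsx'],
};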

Now you can use .jsx files with abandon!

What Happens in the Default Implementation of Stream.ReadAsync Will Shock You

tl;dr The base implementation of Stream.ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken) doesn’t do what you might reasonably expect with respect to cancellation. Make sure you check whether your Stream subclass provides a proper implementation before relying on it.

A common pattern when cancelling a Task is to then wait for it to complete. This ensures that any unmanaged resources that the Task might be using are safe to clean up:

// Request cancellation, then block until the Task actually finishes.
cancellationTokenSource.Cancel();

try
{
  task.GetAwaiter().GetResult();
}
catch (OperationCanceledException)
{
  // Expected when the Task honors the cancellation request.
}

// dispose of other stuff...

However, this pattern can lead to application hangs if the Task in question doesn’t do a good job of supporting cancellation, and the .NET Framework makes it surprisingly easy to fall into Pits of Failure. One such pit of failure is Stream.ReadAsync.

Background

I recently needed to build a system to communicate between a couple of processes running on the same machine. Eventually, I settled on using Windows’ named pipes, which are exposed in the .NET Framework as NamedPipeServerStream and NamedPipeClientStream.

For both ends of the pipe, I created a simple manager class that creates a Task that handles connecting/reconnecting to the pipe and listening for messages from the other end until canceled. In the Dispose method of the manager, I followed the pattern described above. In the Task, the CancellationToken was checked at each iteration, and it was also passed to all async calls. To a casual reader, the implementation appeared flawless.
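
In simplified form, the listening Task looked something like this sketch:

async Task ListenAsync(NamedPipeServerStream pipe, CancellationToken token)
{
  // The token is passed to the connection call...
  await pipe.WaitForConnectionAsync(token);

  var buffer = new byte[4096];
  while (!token.IsCancellationRequested)
  {
    // ...and to every read, yet this is the call that never completes
    // after cancellation (see below).
    int bytesRead = await pipe.ReadAsync(buffer, 0, buffer.Length, token);
    if (bytesRead == 0)
    {
      break; // the other end disconnected
    }
    // ... dispatch the message ...
  }
}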

Nevertheless, both processes hung on Dispose whenever they succeeded in connecting to each other. Pausing the process in the debugger indicated that the hanging thread was waiting for the Task to finish, and no threads were currently associated with the Task.

Huh?

In terms of the Win32 API, a named pipe is mostly just a file: named pipe clients can even open them by calling CreateFile directly. This means that under the hood, they support nearly all of the same operations as files, including overlapped I/O and CancelIoEx.

If you take a look at .NET Core’s implementation, you’ll even find that the Windows-specific implementation of ReadAsync creates an instance of a subclass of TaskCompletionSource that uses these features. You could easily be forgiven for assuming that the .NET Framework would be implemented in exactly the same way.

But you’d be wrong.

Instead, the .NET Framework, as of .NET 4.7, provides no pipe-specific override of ReadAsync, which means that it delegates to Stream’s default implementation. This implementation checks the value of the CancellationToken at the beginning of the method; if it has not been triggered, it discards the token and delegates to a BeginRead/EndRead-based implementation of async I/O.
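
In other words, the default behaves roughly like this paraphrase (not the actual framework source):

using System.IO;
using System.Threading;
using System.Threading.Tasks;

public abstract class SketchStream : Stream
{
  public override Task<int> ReadAsync(
    byte[] buffer, int offset, int count, CancellationToken cancellationToken)
  {
    // The token is consulted exactly once, up front...
    if (cancellationToken.IsCancellationRequested)
    {
      return Task.FromCanceled<int>(cancellationToken);
    }

    // ...and then discarded: the BeginRead/EndRead-based read below
    // has no way to observe a later cancellation request.
    return Task<int>.Factory.FromAsync(BeginRead, EndRead, buffer, offset, count, null);
  }
}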

This is what caused my hang.

Workarounds

I’m not aware of a general-purpose workaround that will work for every subclass of Stream.

In the case of NamedPipeClientStream and NamedPipeServerStream, it is possible to Dispose the stream during an asynchronous read, which will have the effect of terminating the read. If you do this, you may also need to be prepared to catch an ObjectDisposedException when waiting for your Task to complete.
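
Reusing the names from the pattern above (with pipe as the stream being read), the teardown becomes something like:

// Request cancellation, then dispose the stream to terminate any
// outstanding read before waiting on the Task.
cancellationTokenSource.Cancel();
pipe.Dispose();

try
{
  task.GetAwaiter().GetResult();
}
catch (OperationCanceledException)
{
}
catch (ObjectDisposedException)
{
  // An in-flight read may surface the disposal instead.
}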

You could also use P/Invoke to drop down directly to the Win32 API to create a named pipe as a SafeFileHandle and pass it to the FileStream constructor: FileStream does override ReadAsync in terms of overlapped I/O and CancelIoEx, so you could call it without worrying about your Task becoming orphaned. On the other hand, you’d also have to continue dropping to the Win32 API to handle all of the connection-oriented logic.
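
A sketch of that approach (constants inlined, error handling elided):

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

internal static class NativeMethods
{
  // PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED
  internal const uint OpenMode = 0x00000003 | 0x40000000;

  [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
  internal static extern SafeFileHandle CreateNamedPipeW(
    string lpName,
    uint dwOpenMode,
    uint dwPipeMode,
    uint nMaxInstances,
    uint nOutBufferSize,
    uint nInBufferSize,
    uint nDefaultTimeOut,
    IntPtr lpSecurityAttributes);
}

// Later, when setting up the server end:
var handle = NativeMethods.CreateNamedPipeW(
  @"\\.\pipe\my-pipe", NativeMethods.OpenMode, 0, 1, 4096, 4096, 0, IntPtr.Zero);

// isAsync: true makes FileStream use overlapped I/O, so its ReadAsync
// override can genuinely cancel via CancelIoEx.
var stream = new FileStream(handle, FileAccess.ReadWrite, 4096, isAsync: true);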

In any case, if you’re writing code that deals with Streams and needs to handle cancellation, make sure your subclass properly supports it!

MonkeySpace Talk Slides and Notes

I delivered my first conference talk at MonkeySpace yesterday. The video will be available at some point, but in the meantime, here are the slides and some notes to help make sense of them.

Overview

Mono is a potentially attractive option for organizations that are already using ASP.NET because IIS is a pain to configure and Windows licenses can get really expensive.

Many libraries indicate that they support Mono, but there is little information about using it for web sites and services beyond basic, “getting started” tutorials. I’m trying to document and publish my experiments as I go along to help others who might be trying this and in the hope that more people will start documenting their experiences, too.

The major snags I encountered were:

  • xbuild doesn’t have a good fallback mechanism for unknown profiles (we use the Client Profile for some assemblies)
  • Microsoft’s format for Forms Authentication cookies is undocumented and difficult to reverse engineer. (see my earlier post on this topic)
  • Mono’s support for SQL Server hasn’t received significant updates for SQL Server 2008 and 2012.
  • New Relic—an awesome monitoring service we use—doesn’t support Mono.

I worked around all of these issues and even built my own version of the New Relic Profiler (linked below). In testing a simple service, I encountered very inconsistent performance that was quite sensitive to GC pauses.

There was a period of about seven minutes during which the performance of the Mono version of the service was as good as—or better than—its Windows counterpart, so I plan to continue investigating. These are some initial areas I intend to look into:

  • Is the New Relic profiler itself causing noise?
  • Does Mono 3.2 solve this issue? (I tested with one of the 3.0.x releases)
  • Is there something about the application’s architecture that causes difficulty for sgen?
  • Would a stack other than ASP.NET MVC give me better throughput?
