Re-approaching Color

Kevyn Arnott · Published in Lyft Design+ · Sep 21, 2018
Sharing a new way to build color systems for accessible UIs that scale. Build your own with ColorBox.io.

Color is instrumental in how we perceive the world, and that could not be more true within interfaces. Color guides us to predict, understand, and make decisions.

Ever since you were little, you’ve been trained to pause at words in red and relax at numbers in green. You want to read text in black before text in gray. You click on things in blue. It’s so deeply embedded into how we experience products that we never give it another thought.

Color, at least on the surface, appears almost naively simple, yet as it scales across larger products it becomes unbelievably complex. You have thousands of people building products all at once, and those products are all heavily reliant on color. This puts a lot of pressure on the color system to ensure that all the products are created consistently, yet that consistency is hard to enforce since it’s all too easy to apply colors on a one-off basis.

My team is responsible for building and maintaining our design system, the Lyft Product Language. Earlier in the year, we were getting swamped with questions about how to use color, how to add colors, how to modify colors, and how to work with accessibility. And the number of open questions was only growing as our design and engineering teams grew.

Our existing color system wasn’t working, and we realized for us to be successful with color we would need to do something extraordinarily different. To do that, we had to start from the beginning and scrap everything we knew about color systems.

Naming colors

While auditing the colors we use at Lyft, we found the remains of many previous color system attempts. It felt a bit like an archeological dig: color names like moon, slate, and bone all pointed to similar shades of gray, and others like mulberry, purple, and violet pointed to similar shades of purple. This was a big problem for us, especially when it came to our iconic Lyft pink. We found 15 variations of pink across our products.

When we began, we wanted to truly understand why fragmentation was happening. Once we paid attention to how people, especially in different disciplines, described colors to one another, we noticed organic deviations in the way they spoke about colors.

It feels obvious now, but it wasn’t at the time. For us to truly be successful, we needed designers and engineers to speak a common vocabulary around color. That meant designers and engineers could both point to a color and say the same color name, and, conversely, given a color name, both could reasonably guess what it would look like. In essence, we needed a language to support color.

We noticed when people talk about color there are two fundamental pieces being communicated. There was the hue of the color, which pointed to a space on the color wheel, and there was the modifier that pointed to the degree of lightness or darkness of that color. So you’d commonly hear light blue, dark green, deep red, etc.

We knew that building a language for color would be trivial compared to getting people to actually use it day-to-day. So we needed to nail the cost-to-value ratio, meaning the system must be simple to learn and highly efficient.

To make sure it was simple, we leveraged people’s existing color knowledge, so we adopted terms like blue, green, and red to describe hue. For more complicated colors, we preferred shorter, easily spelled words that were fairly easy to learn like mint, teal, rose, etc. We also needed to factor in that the language could last for quite some time, so we chose to plan for every color conceivably possible, and then map them to a hue range. Since there are only 360 hues, we wouldn’t need to change this even if Lyft were to change brand colors. If we ever need to add support for a new hue in the future, we would combine hue names so that we don’t need to modify the existing language infrastructure (e.g. red-sunset or pink-purple).

For lightness and darkness, there wasn’t a great naming system already in place. We chose to go with a simple scale from 0 to 100, with 0 being the lightest possible color and 100 the darkest. The only caveat is that we needed to be intentional in how we mapped colors onto the scale: if we had to retroactively add a shade, for example, we might have to redo all the numbers and reteach everyone in the organization. So we needed to get the assignments right the first time. We’ll get back to how we did this later on.

By equipping people with those two simple things, hue and brightness names, we had our language. So now if you hear red 60, you’ll know it’s a medium red, and if you see a really light shade of blue, you can guess it’s blue 10. The language is quick to learn and reasonably precise.
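As a rough illustration, the language can be modeled in a few lines of code. This is a hypothetical sketch, not Lyft’s implementation; the hue ranges and shade boundaries here are invented for the example.

```python
# Illustrative hue-word-to-hue-range mapping (values are made up, not Lyft's).
HUE_RANGES = {
    "red": (350, 10),      # wraps around 0 on the color wheel
    "yellow": (40, 60),
    "green": (90, 150),
    "blue": (200, 250),
    "purple": (260, 290),
}

def color_name(hue_word: str, step: int) -> str:
    """Compose a name like 'blue 10' from a hue word and a 0-100 step."""
    if hue_word not in HUE_RANGES:
        raise ValueError(f"unknown hue: {hue_word}")
    if step % 10 != 0 or not 0 <= step <= 100:
        raise ValueError("steps run 0, 10, 20 ... 100")
    return f"{hue_word} {step}"

def describe(name: str) -> str:
    """Given a name like 'red 60', guess roughly what the color looks like."""
    hue_word, step_text = name.split()
    step = int(step_text)
    shade = "light" if step <= 30 else "medium" if step <= 60 else "dark"
    return f"a {shade} {hue_word}"
```

For example, `describe("red 60")` returns "a medium red", which is exactly the kind of guess the shared vocabulary is meant to make possible.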

Selecting colors

When we look at color systems, nearly all of them are similar in that a designer chooses colors by opening a tool like Illustrator, Photoshop, or Sketch, picking a color, and using some overlay method to achieve light and dark variants. It’s a classic method that is ubiquitous in today’s systems.

While that method works in the short term, it tends to build an expiration date into any color system. For example, if we need to modify the color set later, we have no assurance it will be done in the same manner. Even if the same designer were to do it, there’s a good chance it would be done differently, for good reason: color selection relies purely on our visual perception, and any number of changes to our diet, environment, tools, or methods can drastically change the outcome. Because this method can’t support our long-term vision for color, we had to find another one.

We turned to math and looked at ways we could programmatically generate color sets. However, we weren’t happy with the results existing programmatic methods provided. While we were able to get decent results for a specific hue, when we applied the same method to other hues the results were lackluster.

Blue & yellow hues using the same method. The blue set provides quite a few usable shades, whereas the yellow set becomes unusable early on.

We realized that this is caused by the great variance in color spaces, and the common programmatic approaches only allowed us to progress through color spaces in a singular way.

We see far more yellows and greens than we do blues or reds.

We saw this as an opportunity to explore what else we could do with math. So we asked ourselves what we needed from a color set, and the answer was control. In interfaces, we don’t need an even distribution of light and dark shades. We need pockets of concentration in light and dark shades with only a few middle shades. We also need to have more control in how colors progress in hue and saturation.

When we looked at conventional programmatic color generation tools, nearly all of them progressed at a fixed rate with an even spread of light and dark colors, and with minor, if any, changes to saturation and hue. So we decided to build our own algorithm to generate colors.

WARNING: This is about to get technical, so if you’d like to learn how the algorithm works, read on. If you’re not super interested, skip ahead to Accessibility :).

Color is best represented in 3D because there are three dimensions to it: hue (bottom), saturation (right), and luminosity or value (left).

In this box, we are representing every possible color.

Like any algorithm, we need to provide a set of inputs to produce a set of outputs. This algorithm is built to produce a color set meaning the output will be a set of shades for one hue. To build an entire color system, we’d need to repeat the algorithm for each hue.

For the inputs, we need to tell the algorithm how many shades or steps we want to produce in this color set. At Lyft, we have 11 steps in each of our color sets, which means we have a color 0, color 10, color 20 … all the way up to color 100.

Once we’ve set the number of steps, we need to provide the hue range: a start value and an end value, each from 0 to 359. Depending on the hue, this is sometimes a small range, as for reds, or a large one, as for yellows. Hue ranges can ascend or descend based on preference.

Once we have our hues and steps, we need to tell the algorithm about the saturation range. Saturation goes from 0 to 1. In our algorithm, we added the ability to manipulate the rate of saturation. This allows a color set to accelerate its progression across the saturation channel to reach fuller saturation sooner for branded colors or reduce the saturation to achieve gray-scale colors.
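The rate control can be modeled as a simple exponent on the interpolation parameter. This is a hypothetical sketch of one way to bend a channel’s progression; ColorBox’s actual easing options may differ.

```python
def eased_value(start: float, end: float, t: float, rate: float = 1.0) -> float:
    """Interpolate a channel from start to end, with t in [0, 1].

    rate < 1 accelerates the progression (the value climbs toward `end`
    sooner); rate > 1 delays it; rate == 1 is a plain linear ramp.
    """
    return start + (end - start) * (t ** rate)
```

With `rate=0.5`, a quarter of the way through the steps the saturation has already covered half its range, which is how a branded color can reach full saturation early.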

The last set of inputs for us to provide is the luminosity values. Luminosity works similarly to saturation in that it’s a 0–1 value. You may notice that the dots on the other axes also move when we adjust luminosity. That’s because color lives in a 3D space, so all the color channels are closely connected.

Once we have all the inputs, the algorithm produces a color set.
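Putting the inputs together, a minimal sketch of such a generator might look like the following. The default parameter values, the power-curve easing, and the use of HSV value as a stand-in for luminosity are assumptions for illustration, not Lyft’s exact algorithm.

```python
import colorsys

def generate_color_set(steps=11,
                       hue_start=211, hue_end=234,          # an example blue range
                       sat_start=0.10, sat_end=1.00, sat_rate=1.0,
                       lum_start=1.00, lum_end=0.10, lum_rate=1.0):
    """Produce `steps` shades for one hue as hex strings.

    Each channel is interpolated independently across the steps; the
    `*_rate` exponents bend the progression so shades can cluster at the
    light and dark ends rather than spreading evenly.
    """
    shades = []
    for i in range(steps):
        t = i / (steps - 1)
        hue = hue_start + (hue_end - hue_start) * t
        sat = sat_start + (sat_end - sat_start) * (t ** sat_rate)
        lum = lum_start + (lum_end - lum_start) * (t ** lum_rate)
        r, g, b = colorsys.hsv_to_rgb((hue % 360) / 360, sat, lum)
        shades.append("#{:02x}{:02x}{:02x}".format(
            round(r * 255), round(g * 255), round(b * 255)))
    return shades
```

Running it once yields an 11-step set (color 0 through color 100) for a single hue; repeating it per hue, with tuned inputs, builds the full system.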

This algorithm allows us to remove all the dependencies we previously had with color selection, so if we have a new designer working on this or we change tools or monitors, we’ll still have the same outcome. This algorithm also enables us to quickly modify or scale color as we need to over time.

Accessibility

We made accessibility a cornerstone of our new color system. We wanted to remove the need to manually check color contrast using third-party tools, and we needed to make it dead-simple for everyone to create accessible products.

To solve this, we leveraged what we had already done with color naming and selection. Using our algorithm, we made our color lightness-to-darkness consistent across color hues, so that every color 0–50 is accessible (4.5:1) on black, and every color 60–100 is accessible (4.5:1) on white.

Now, seeing a color in code or hearing its name provides enough information to determine whether that color is accessible. For example, a designer or engineer can read Red 50 and know it isn’t accessible on white, but read Red 60 and know it is.
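The 4.5:1 figure above is the WCAG AA contrast threshold for normal text. As a sketch of how a shade can be verified automatically, here is the standard WCAG relative-luminance and contrast-ratio calculation (the function names are my own):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG relative luminance of an sRGB hex color like '#ff00bf'."""
    def channel(c: int) -> float:
        # Undo the sRGB transfer function per the WCAG definition.
        c = c / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: 1:1 for identical colors, 21:1 for black on white."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

A shade passes on white when `contrast_ratio(shade, "#ffffff") >= 4.5`, which is the check the 0–100 scale bakes into the names themselves.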

Tooling

Finally, we changed how we supported color. Our previous color systems had very little tooling to prevent deviation and help maintain our system for both engineers and designers.

For engineers, we built a migration tool that went through our codebase and migrated all of the existing colors to our new color system. We also built linters to prevent new colors from being introduced to the codebase.

For designers, we built a Sketch plugin that overrides the color inspector. This puts our new colors right in the designer’s workflow.

Now it’s your turn

At Lyft, we believe in an inclusive future where anyone can pick up a product and be successful. We feel that in order for an inclusive future to happen we all need to be thinking about and building accessible products.

So today, we’ve shared our learnings about working with color, and we are open-sourcing our color algorithm. To ensure that every team that’s interested in using this color algorithm is able to, we’ve packaged it together as a web tool that we call ColorBox.

Enjoy!

3D Graphics by Han Han Xue, Colorbox Logo by Nick Slater.

Special thanks to Linda Dong, Sam Soffes, Kathy Ma, Katie Dill and the entire Core Design team!

Interested in joining the Lyft Design team? We’re hiring.
